
The Matrix Cookbook 

[ http://matrixcookbook.com ] 


Kaare Brandt Petersen 
Michael Syskind Pedersen 

Version: November 15, 2012 





Introduction 


What is this? These pages are a collection of facts (identities, approximations, inequalities, relations, ...) about matrices and matters relating to them. It is collected in this form for the convenience of anyone who wants a quick desktop reference.

Disclaimer: The identities, approximations and relations presented here were obviously not invented but collected, borrowed and copied from a large amount of sources. These sources include similar but shorter notes found on the internet and appendices in books - see the references for a full list.

Errors: Very likely there are errors, typos, and mistakes for which we apologize and would be grateful to receive corrections at cookbook@2302.dk.

It's ongoing: The project of keeping a large repository of relations involving matrices is naturally ongoing and the version will be apparent from the date in the header.

Suggestions: Your suggestion for additional content or elaboration of some topics is most welcome at cookbook@2302.dk.

Keywords: Matrix algebra, matrix relations, matrix identities, derivative of determinant, derivative of inverse matrix, differentiate a matrix.

Acknowledgements: We would like to thank the following for contributions and suggestions: Bill Baxter, Brian Templeton, Christian Rishøj, Christian Schröppel, Dan Boley, Douglas L. Theobald, Esben Hoegh-Rasmussen, Evripidis Karseras, Georg Martius, Glynne Casteel, Jan Larsen, Jun Bin Gao, Jürgen Struckmeier, Kamil Dedecius, Karim T. Abou-Moustafa, Korbinian Strimmer, Lars Christiansen, Lars Kai Hansen, Leland Wilkinson, Liguo He, Loic Thibaut, Markus Froeb, Michael Hubatka, Miguel Barão, Ole Winther, Pavel Sakov, Stephan Hattinger, Troels Pedersen, Vasile Sima, Vincent Rabaud, Zhaoshui He. We would also like to thank The Oticon Foundation for funding our PhD studies.


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 2 



CONTENTS 


CONTENTS 


Contents 


1 Basics 6
    1.1 Trace 6
    1.2 Determinant 6
    1.3 The Special Case 2x2 7

2 Derivatives 8
    2.1 Derivatives of a Determinant 8
    2.2 Derivatives of an Inverse 9
    2.3 Derivatives of Eigenvalues 10
    2.4 Derivatives of Matrices, Vectors and Scalar Forms 10
    2.5 Derivatives of Traces 12
    2.6 Derivatives of vector norms 14
    2.7 Derivatives of matrix norms 14
    2.8 Derivatives of Structured Matrices 14

3 Inverses 17
    3.1 Basic 17
    3.2 Exact Relations 18
    3.3 Implication on Inverses 20
    3.4 Approximations 20
    3.5 Generalized Inverse 21
    3.6 Pseudo Inverse 21

4 Complex Matrices 24
    4.1 Complex Derivatives 24
    4.2 Higher order and non-linear derivatives 26
    4.3 Inverse of complex sum 27

5 Solutions and Decompositions 28
    5.1 Solutions to linear equations 28
    5.2 Eigenvalues and Eigenvectors 30
    5.3 Singular Value Decomposition 31
    5.4 Triangular Decomposition 32
    5.5 LU decomposition 32
    5.6 LDM decomposition 33
    5.7 LDL decompositions 33

6 Statistics and Probability 34
    6.1 Definition of Moments 34
    6.2 Expectation of Linear Combinations 35
    6.3 Weighted Scalar Variable 36

7 Multivariate Distributions 37
    7.1 Cauchy 37
    7.2 Dirichlet 37
    7.3 Normal 37
    7.4 Normal-Inverse Gamma 37
    7.5 Gaussian 37
    7.6 Multinomial 37
    7.7 Student's t 37
    7.8 Wishart 38
    7.9 Wishart, Inverse 39

8 Gaussians 40
    8.1 Basics 40
    8.2 Moments 42
    8.3 Miscellaneous 44
    8.4 Mixture of Gaussians 44

9 Special Matrices 46
    9.1 Block matrices 46
    9.2 Discrete Fourier Transform Matrix, The 47
    9.3 Hermitian Matrices and skew-Hermitian 48
    9.4 Idempotent Matrices 49
    9.5 Orthogonal matrices 49
    9.6 Positive Definite and Semi-definite Matrices 50
    9.7 Singleentry Matrix, The 52
    9.8 Symmetric, Skew-symmetric/Antisymmetric 54
    9.9 Toeplitz Matrices 54
    9.10 Transition matrices 55
    9.11 Units, Permutation and Shift 56
    9.12 Vandermonde Matrices 57

10 Functions and Operators 58
    10.1 Functions and Series 58
    10.2 Kronecker and Vec Operator 59
    10.3 Vector Norms 61
    10.4 Matrix Norms 61
    10.5 Rank 62
    10.6 Integral Involving Dirac Delta Functions 62
    10.7 Miscellaneous 63

A One-dimensional Results 64
    A.1 Gaussian 64
    A.2 One Dimensional Mixture of Gaussians 65

B Proofs and Details 66
    B.1 Misc Proofs 66

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 4 


CONTENTS 


CONTENTS 


Notation and Nomenclature 


A           A matrix
A_ij        Matrix indexed for some purpose
A_i         Matrix indexed for some purpose
A^ij        Matrix indexed for some purpose
A^n         Matrix indexed for some purpose or the n.th power of a square matrix
A^-1        The inverse matrix of the matrix A
A^+         The pseudo inverse matrix of the matrix A (see Sec. 3.6)
A^1/2       The square root of a matrix (if unique), not elementwise
(A)_ij      The (i,j).th entry of the matrix A
A_ij        The (i,j).th entry of the matrix A
[A]_ij      The ij-submatrix, i.e. A with i.th row and j.th column deleted
a           Vector (column-vector)
a_i         Vector indexed for some purpose
a_i         The i.th element of the vector a
a           Scalar

Re z        Real part of a scalar
Re z        Real part of a vector
Re Z        Real part of a matrix
Im z        Imaginary part of a scalar
Im z        Imaginary part of a vector
Im Z        Imaginary part of a matrix

det(A)      Determinant of A
Tr(A)       Trace of the matrix A
diag(A)     Diagonal matrix of the matrix A, i.e. (diag(A))_ij = δ_ij A_ij
eig(A)      Eigenvalues of the matrix A
vec(A)      The vector-version of the matrix A (see Sec. 10.2.2)
sup         Supremum of a set
||A||       Matrix norm (subscript if any denotes what norm)
A^T         Transposed matrix
A^-T        The inverse of the transposed and vice versa, A^-T = (A^-1)^T = (A^T)^-1
A^*         Complex conjugated matrix
A^H         Transposed and complex conjugated matrix (Hermitian)

A ∘ B       Hadamard (elementwise) product
A ⊗ B       Kronecker product

0           The null matrix. Zero in all entries.
I           The identity matrix
J^ij        The single-entry matrix, 1 at (i,j) and zero elsewhere
Σ           A positive definite matrix
Λ           A diagonal matrix


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 5 


1 BASICS 


1 Basics 


    (AB)^-1 = B^-1 A^-1                                    (1)
    (ABC...)^-1 = ...C^-1 B^-1 A^-1                        (2)
    (A^T)^-1 = (A^-1)^T                                    (3)
    (A + B)^T = A^T + B^T                                  (4)
    (AB)^T = B^T A^T                                       (5)
    (ABC...)^T = ...C^T B^T A^T                            (6)
    (A^H)^-1 = (A^-1)^H                                    (7)
    (A + B)^H = A^H + B^H                                  (8)
    (AB)^H = B^H A^H                                       (9)
    (ABC...)^H = ...C^H B^H A^H                            (10)
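
The identities above are easy to sanity-check numerically. A minimal NumPy sketch (the random complex test matrices and the use of allclose are illustrative assumptions, not part of the original text) verifying (1), (5) and (9):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

    # (1): (AB)^-1 = B^-1 A^-1
    assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))
    # (5): (AB)^T = B^T A^T
    assert np.allclose((A @ B).T, B.T @ A.T)
    # (9): (AB)^H = B^H A^H
    assert np.allclose((A @ B).conj().T, B.conj().T @ A.conj().T)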


1.1 Trace 


    Tr(A) = Σ_i A_ii                                       (11)
    Tr(A) = Σ_i λ_i,   λ_i = eig(A)                        (12)
    Tr(A) = Tr(A^T)                                        (13)
    Tr(AB) = Tr(BA)                                        (14)
    Tr(A + B) = Tr(A) + Tr(B)                              (15)
    Tr(ABC) = Tr(BCA) = Tr(CAB)                            (16)
    a^T a = Tr(a a^T)                                      (17)


1.2 Determinant 

Let A be an n x n matrix. 


    det(A) = Π_i λ_i,   λ_i = eig(A)                       (18)
    det(cA) = c^n det(A),   if A ∈ R^{n×n}                 (19)
    det(A^T) = det(A)                                      (20)
    det(AB) = det(A) det(B)                                (21)
    det(A^-1) = 1/det(A)                                   (22)
    det(A^n) = det(A)^n                                    (23)
    det(I + u v^T) = 1 + u^T v                             (24)

For n = 2:

    det(I + A) = 1 + det(A) + Tr(A)                        (25)

For n = 3:

    det(I + A) = 1 + det(A) + Tr(A) + 1/2 Tr(A)^2 - 1/2 Tr(A^2)        (26)


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 6 



1.3 The Special Case 2x2 


1 BASICS 


For n = 4:

    det(I + A) = 1 + det(A) + Tr(A) + 1/2 Tr(A)^2 - 1/2 Tr(A^2)
                 + 1/6 Tr(A)^3 - 1/2 Tr(A) Tr(A^2) + 1/3 Tr(A^3)        (27)

For small ε, the following approximation holds

    det(I + εA) ≅ 1 + det(A) + ε Tr(A) + 1/2 ε^2 Tr(A)^2 - 1/2 ε^2 Tr(A^2)        (28)
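
For n = 4 the expansion (27) is in fact exact, which makes it easy to verify; a NumPy sketch under an assumed random test matrix (not part of the original text):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4))
    tr = np.trace
    lhs = np.linalg.det(np.eye(4) + A)
    rhs = (1 + np.linalg.det(A) + tr(A) + 0.5 * tr(A)**2 - 0.5 * tr(A @ A)
           + tr(A)**3 / 6 - 0.5 * tr(A) * tr(A @ A) + tr(A @ A @ A) / 3)
    assert np.isclose(lhs, rhs)     # eq. (27), exact for 4x4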


1.3 The Special Case 2x2 

Consider the matrix A

    A = [ A_11  A_12 ]
        [ A_21  A_22 ]

Determinant and trace

    det(A) = A_11 A_22 - A_12 A_21                         (29)
    Tr(A) = A_11 + A_22                                    (30)

Eigenvalues

    λ^2 - λ·Tr(A) + det(A) = 0

    λ_1 = (Tr(A) + sqrt(Tr(A)^2 - 4 det(A))) / 2        λ_2 = (Tr(A) - sqrt(Tr(A)^2 - 4 det(A))) / 2

    λ_1 + λ_2 = Tr(A)        λ_1 λ_2 = det(A)

Eigenvectors

    v_1 ∝ [ A_12        ]        v_2 ∝ [ A_12        ]
          [ λ_1 - A_11  ]              [ λ_2 - A_11  ]

Inverse

    A^-1 = 1/det(A) [  A_22  -A_12 ]                       (31)
                    [ -A_21   A_11 ]
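
These 2x2 formulas translate directly into code. A small NumPy sketch (the example matrix is an arbitrary assumption) computing the eigenvalues and the inverse from Tr(A) and det(A) and checking them against the library routines:

    import numpy as np

    A = np.array([[3.0, 1.0],
                  [2.0, 4.0]])
    t, d = np.trace(A), np.linalg.det(A)
    disc = np.sqrt(t**2 - 4 * d)
    lam1, lam2 = (t + disc) / 2, (t - disc) / 2           # eigenvalues from (29)-(30)
    Ainv = np.array([[A[1, 1], -A[0, 1]],
                     [-A[1, 0], A[0, 0]]]) / d            # inverse, eq. (31)
    assert np.allclose(sorted(np.linalg.eigvals(A)), sorted([lam2, lam1]))
    assert np.allclose(Ainv, np.linalg.inv(A))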


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 7 



2 DERIVATIVES 


2 Derivatives 


This section covers differentiation of a number of expressions with respect to a matrix X. Note that it is always assumed that X has no special structure, i.e. that the elements of X are independent (e.g. not symmetric, Toeplitz, positive definite). See section 2.8 for differentiation of structured matrices. The basic assumptions can be written in a formula as


dX k i 

dX~j 


— 3ik&lj 


that is for e.g. vector forms, 


dx 

dxi 

dx 

dx 

dx 

dy 

i~ d v 

dy 

i ~ d y* 

[<9yJ 


dxj 

dyj 


(32) 


The following rules are general and very useful when deriving the differential of an expression:

    ∂A = 0                              (A is a constant)  (33)
    ∂(αX) = α ∂X                                           (34)
    ∂(X + Y) = ∂X + ∂Y                                     (35)
    ∂(Tr(X)) = Tr(∂X)                                      (36)
    ∂(XY) = (∂X)Y + X(∂Y)                                  (37)
    ∂(X ∘ Y) = (∂X) ∘ Y + X ∘ (∂Y)                         (38)
    ∂(X ⊗ Y) = (∂X) ⊗ Y + X ⊗ (∂Y)                         (39)
    ∂(X^-1) = -X^-1 (∂X) X^-1                              (40)
    ∂(det(X)) = Tr(adj(X) ∂X)                              (41)
    ∂(det(X)) = det(X) Tr(X^-1 ∂X)                         (42)
    ∂(ln(det(X))) = Tr(X^-1 ∂X)                            (43)
    ∂X^T = (∂X)^T                                          (44)
    ∂X^H = (∂X)^H                                          (45)


2.1 Derivatives of a Determinant 

2.1.1 General form 


    ∂det(Y)/∂x = det(Y) Tr[ Y^-1 ∂Y/∂x ]                                   (46)

    Σ_k ∂det(X)/∂X_ik X_jk = δ_ij det(X)                                   (47)

    ∂^2 det(Y)/∂x^2 = det(Y) [ Tr[ Y^-1 ∂(∂Y/∂x)/∂x ]
                               + Tr[ Y^-1 ∂Y/∂x ] Tr[ Y^-1 ∂Y/∂x ]
                               - Tr[ (Y^-1 ∂Y/∂x)(Y^-1 ∂Y/∂x) ] ]          (48)


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 8 


2.2 Derivatives of an Inverse 


2 DERIVATIVES 


2.1.2 Linear forms 



ddet(X) 

dX 

= det(X)(X^ 1 ) T 

(49) 

<9det(X) 

X ax ik x ‘ k 

k 

= Sij det(X) 

(50) 

9det(AXB) 

dX 

= det(AXB)(X- 1 ) T = det(AXB)(X T )- 1 

(51) 

2.1.3 Square forms 




If X is square and invertible, then

    ∂det(X^T A X)/∂X = 2 det(X^T A X) X^-T                                 (52)

If X is not square but A is symmetric, then

    ∂det(X^T A X)/∂X = 2 det(X^T A X) A X (X^T A X)^-1                     (53)

If X is not square and A is not symmetric, then

    ∂det(X^T A X)/∂X = det(X^T A X) ( A X (X^T A X)^-1 + A^T X (X^T A^T X)^-1 )        (54)


2.1.4 Other nonlinear forms 

Some special cases are (See [9], [7])

    ∂ ln det(X^T X) / ∂X = 2 (X^+)^T                                       (55)
    ∂ ln det(X^T X) / ∂X^+ = -2 X^T                                        (56)
    ∂ ln |det(X)| / ∂X = (X^-1)^T = (X^T)^-1                               (57)
    ∂ det(X^k) / ∂X = k det(X^k) X^-T                                      (58)


2.2 Derivatives of an Inverse

From (40) we have the basic identity

    ∂Y^-1/∂x = -Y^-1 (∂Y/∂x) Y^-1                                          (59)


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 9 


2.3 Derivatives of Eigenvalues 


2 DERIVATIVES 


from which it follows

    ∂(X^-1)_kl / ∂X_ij = -(X^-1)_ki (X^-1)_jl                              (60)
    ∂ a^T X^-1 b / ∂X = -X^-T a b^T X^-T                                   (61)
    ∂ det(X^-1) / ∂X = -det(X^-1) (X^-1)^T                                 (62)
    ∂ Tr(A X^-1 B) / ∂X = -(X^-1 B A X^-1)^T                               (63)
    ∂ Tr((X + A)^-1) / ∂X = -((X + A)^-1 (X + A)^-1)^T                     (64)


From [32] we have the following result: Let A be an n × n invertible square matrix, W be the inverse of A, and J(A) an n × n-variate and differentiable function with respect to A. Then the partial differentials of J with respect to A and W satisfy

    ∂J/∂A = -A^-T (∂J/∂W) A^-T


2.3 Derivatives of Eigenvalues 


    ∂/∂X Σ_i eig(X) = ∂/∂X Tr(X) = I                                       (65)
    ∂/∂X Π_i eig(X) = ∂/∂X det(X) = det(X) X^-T                            (66)

If A is real and symmetric, λ_i and v_i are distinct eigenvalues and eigenvectors of A (see (276)) with v_i^T v_i = 1, then [35]

    ∂λ_i = v_i^T ∂(A) v_i                                                  (67)
    ∂v_i = (λ_i I - A)^+ ∂(A) v_i                                          (68)


2.4 Derivatives of Matrices, Vectors and Scalar Forms 

2.4.1 First Order 


<9x T a 

da T x 

dx 

<9x 

<9a T Xb 

= ab T 

dX 

<9a T X T b 

= ba T 

dX 

9a T Xa 

da T X T a 

dX 

dX 

II 

Q; 

dX tJ 


d(XA)ij 

dXmn 


d(X T A) ij 


dXmn 


a 

(69) 


(70) 


(71) 

= aa T 

(72) 


(73) 

= (J m "A)y 

(74) 

= (J" m A)y 

(75) 


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 10 


2 A Derivatives of Matrices, Vectors and Scalar Forms 2 DERIVATIVES 


2.4.2 Second Order 


    ∂/∂X_ij Σ_klmn X_kl X_mn = 2 Σ_kl X_kl                                 (76)
    ∂b^T X^T X c/∂X = X(b c^T + c b^T)                                     (77)
    ∂(Bx + b)^T C (Dx + d)/∂x = B^T C (Dx + d) + D^T C^T (Bx + b)          (78)
    ∂(X^T B X)_kl/∂X_ij = δ_lj (X^T B)_ki + δ_kj (BX)_il                   (79)
    ∂(X^T B X)/∂X_ij = X^T B J^ij + J^ji B X,    (J^ij)_kl = δ_ik δ_jl     (80)

See Sec 9.7 for useful properties of the single-entry matrix J^ij.

    ∂x^T B x/∂x = (B + B^T) x                                              (81)
    ∂b^T X^T D X c/∂X = D^T X b c^T + D X c b^T                            (82)
    ∂/∂X (Xb + c)^T D (Xb + c) = (D + D^T)(Xb + c) b^T                     (83)

Assume W is symmetric, then

    ∂/∂s (x - As)^T W (x - As) = -2 A^T W (x - As)                         (84)
    ∂/∂x (x - s)^T W (x - s) = 2 W (x - s)                                 (85)
    ∂/∂s (x - s)^T W (x - s) = -2 W (x - s)                                (86)
    ∂/∂x (x - As)^T W (x - As) = 2 W (x - As)                              (87)
    ∂/∂A (x - As)^T W (x - As) = -2 W (x - As) s^T                         (88)

As a case with complex values the following holds

    ∂(a - x^H b)^2/∂x = -2 b (a - x^H b)^*                                 (89)

This formula is also known from the LMS algorithm [14].


2.4.3 Higher-order and non-linear 



    ∂(X^n)_kl/∂X_ij = Σ_{r=0}^{n-1} (X^r J^ij X^{n-1-r})_kl                (90)

For proof of the above, see B.1.3.

    ∂/∂X a^T X^n b = Σ_{r=0}^{n-1} (X^r)^T a b^T (X^{n-1-r})^T             (91)


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 11 


2.5 Derivatives of Traces 


2 DERIVATIVES 


    ∂/∂X a^T (X^n)^T X^n b = Σ_{r=0}^{n-1} [ X^{n-1-r} a b^T (X^n)^T X^r
                                             + (X^r)^T X^n a b^T (X^{n-1-r})^T ]        (92)

See B.1.3 for a proof.

Assume s and r are functions of x, i.e. s = s(x), r = r(x), and that A is a constant, then

    ∂/∂x s^T A r = [∂s/∂x]^T A r + [∂r/∂x]^T A^T s                         (93)

    ∂/∂x (Ax)^T (Ax) / ((Bx)^T (Bx)) = ∂/∂x (x^T A^T A x)/(x^T B^T B x)    (94)
        = 2 A^T A x / (x^T B^T B x) - 2 (x^T A^T A x)(B^T B x) / (x^T B^T B x)^2        (95)


2.4.4 Gradient and Hessian 

Using the above we have for the gradient and the Hessian 


    f = x^T A x + b^T x                                                    (96)
    ∇_x f = ∂f/∂x = (A + A^T) x + b                                        (97)
    ∂^2 f / ∂x ∂x^T = A + A^T                                              (98)
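
The gradient formula (97) can be checked against finite differences; a short NumPy sketch with assumed random data (not part of the original text):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 5
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    x = rng.standard_normal(n)

    f = lambda x: x @ A @ x + b @ x
    grad = (A + A.T) @ x + b                     # eq. (97)
    hess = A + A.T                               # eq. (98)

    eps = 1e-6
    num_grad = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps) for e in np.eye(n)])
    assert np.allclose(grad, num_grad, atol=1e-4)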

2.5 Derivatives of Traces 

Assume F(X) to be a differentiable function of each of the elements of X. It then holds that

    ∂Tr(F(X))/∂X = f(X)^T

where f(·) is the scalar derivative of F(·).

2.5.1 First Order 



    ∂/∂X Tr(X) = I                                                         (99)
    ∂/∂X Tr(XA) = A^T                                                      (100)
    ∂/∂X Tr(AXB) = A^T B^T                                                 (101)
    ∂/∂X Tr(AX^T B) = BA                                                   (102)
    ∂/∂X Tr(X^T A) = A                                                     (103)
    ∂/∂X Tr(AX^T) = A                                                      (104)
    ∂/∂X Tr(A ⊗ X) = Tr(A) I                                               (105)


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 12 


2.5 Derivatives of Traces 


2 DERIVATIVES 


2.5.2 Second Order

    ∂/∂X Tr(X^2) = 2 X^T                                                   (106)
    ∂/∂X Tr(X^2 B) = (XB + BX)^T                                           (107)
    ∂/∂X Tr(X^T B X) = BX + B^T X                                          (108)
    ∂/∂X Tr(B X X^T) = BX + B^T X                                          (109)
    ∂/∂X Tr(X X^T B) = BX + B^T X                                          (110)
    ∂/∂X Tr(X B X^T) = X B^T + X B                                         (111)
    ∂/∂X Tr(B X^T X) = X B^T + X B                                         (112)
    ∂/∂X Tr(X^T X B) = X B^T + X B                                         (113)
    ∂/∂X Tr(A X B X) = A^T X^T B^T + B^T X^T A^T                           (114)
    ∂/∂X Tr(X^T X) = ∂/∂X Tr(X X^T) = 2 X                                  (115)
    ∂/∂X Tr(B^T X^T C X B) = C^T X B B^T + C X B B^T                       (116)
    ∂/∂X Tr[X^T B X C] = B X C + B^T X C^T                                 (117)
    ∂/∂X Tr(A X B X^T C) = A^T C^T X B^T + C A X B                         (118)
    ∂/∂X Tr[(A X B + C)(A X B + C)^T] = 2 A^T (A X B + C) B^T              (119)
    ∂/∂X Tr(X ⊗ X) = ∂/∂X Tr(X) Tr(X) = 2 Tr(X) I                          (120)

See [7].

2.5.3 Higher Order

    ∂/∂X Tr(X^k) = k (X^{k-1})^T                                           (121)
    ∂/∂X Tr(A X^k) = Σ_{r=0}^{k-1} (X^r A X^{k-r-1})^T                     (122)
    ∂/∂X Tr[B^T X^T C X X^T C X B] = C X X^T C X B B^T
                                     + C^T X B B^T X^T C^T X
                                     + C X B B^T X^T C X
                                     + C^T X X^T C^T X B B^T               (123)


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 13 


2.6 Derivatives of vector norms 


2 DERIVATIVES 


2.5.4 Other 

    ∂/∂X Tr(A X^-1 B) = -(X^-1 B A X^-1)^T = -X^-T A^T B^T X^-T            (124)

Assume B and C to be symmetric, then

    ∂/∂X Tr[(X^T C X)^-1 A] = -(C X (X^T C X)^-1)(A + A^T)(X^T C X)^-1     (125)
    ∂/∂X Tr[(X^T C X)^-1 (X^T B X)] = -2 C X (X^T C X)^-1 X^T B X (X^T C X)^-1
                                      + 2 B X (X^T C X)^-1                 (126)
    ∂/∂X Tr[(A + X^T C X)^-1 (X^T B X)] = -2 C X (A + X^T C X)^-1 X^T B X (A + X^T C X)^-1
                                          + 2 B X (A + X^T C X)^-1         (127)

See [7].

    ∂Tr(sin(X))/∂X = cos(X)^T                                              (128)

2.6 Derivatives of vector norms

2.6.1 Two-norm

    ∂/∂x ||x - a||_2 = (x - a) / ||x - a||_2                               (129)

    ∂/∂x (x - a)/||x - a||_2 = I/||x - a||_2 - (x - a)(x - a)^T / ||x - a||_2^3        (130)

    ∂||x||_2^2/∂x = ∂(x^T x)/∂x = 2x                                       (131)

2.7 Derivatives of matrix norms

For more on matrix norms, see Sec. 10.4.

2.7.1 Frobenius norm

    ∂/∂X ||X||_F^2 = ∂/∂X Tr(X X^H) = 2X                                   (132)

See (248). Note that this is also a special case of the result in equation (119).


2.8 Derivatives of Structured Matrices 

Assume that the matrix A has some structure, i.e. is symmetric, Toeplitz, etc. In that case the derivatives of the previous section do not apply in general. Instead, consider the following general rule for differentiating a scalar function f(A)

    ∂f/∂A_ij = Σ_kl ∂f/∂A_kl · ∂A_kl/∂A_ij = Tr[ (∂f/∂A)^T ∂A/∂A_ij ]      (133)


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 14 


2.8 Derivatives of Structured Matrices 


2 DERIVATIVES 


The matrix differentiated with respect to itself is in this document referred to as the structure matrix of A and is defined simply by

    ∂A/∂A_ij = S^ij                                                        (134)

If A has no special structure we have simply S^ij = J^ij, that is, the structure matrix is simply the single-entry matrix. Many structures have a representation in single-entry matrices, see Sec. 9.7.6 for more examples of structure matrices.


2.8.1 The Chain Rule 


Sometimes the objective is to find the derivative of a matrix which is a function of another matrix. Let U = f(X); the goal is to find the derivative of the function g(U) with respect to X:

    ∂g(U)/∂X = ∂g(f(X))/∂X                                                 (135)

Then the Chain Rule can be written the following way:

    ∂g(U)/∂X = ∂g(U)/∂x_ij = Σ_k Σ_l ∂g(U)/∂u_kl · ∂u_kl/∂x_ij             (136)

Using matrix notation, this can be written as:

    [∂g(U)/∂X]_ij = Tr[ (∂g(U)/∂U)^T ∂U/∂x_ij ]                            (137)


2.8.2 Symmetric

If A is symmetric, then S^ij = J^ij + J^ji - J^ij J^ij and therefore

    ∂f/∂A = [∂f/∂A] + [∂f/∂A]^T - diag[∂f/∂A]                              (138)

That is, e.g.:

    ∂Tr(AX)/∂X = A + A^T - (A ∘ I),   see (142)                            (139)
    ∂det(X)/∂X = det(X)(2 X^-1 - (X^-1 ∘ I))                               (140)
    ∂ln det(X)/∂X = 2 X^-1 - (X^-1 ∘ I)                                    (141)

2.8.3 Diagonal

If X is diagonal, then:

    ∂Tr(AX)/∂X = A ∘ I                                                     (142)


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 15 


2.8 Derivatives of Structured Matrices 


2 DERIVATIVES 


2.8.4 Toeplitz 

Like symmetric and diagonal matrices, Toeplitz matrices have a special structure which should be taken into account when taking the derivative with respect to a matrix with Toeplitz structure. The derivative of a trace with respect to a Toeplitz matrix T is itself a Toeplitz matrix:

    ∂Tr(AT)/∂T = ∂Tr(TA)/∂T
        = [ Tr(A)           Tr([A^T]_n1)    ...            A_n1
            Tr([A^T]_1n)    Tr(A)           ...            ...
            ...             ...             ...            Tr([A^T]_n1)
            A_1n            ...             Tr([A^T]_1n)   Tr(A)        ]
        ≡ α(A)                                                             (143)

As can be seen, the derivative α(A) also has a Toeplitz structure. Each value on the diagonal is the sum of all the diagonal values in A, and the values in the diagonals next to the main diagonal equal the sum of the diagonal next to the main diagonal in A^T. This result is only valid for the unconstrained Toeplitz matrix. If the Toeplitz matrix also is symmetric, the same derivative yields

    ∂Tr(AT)/∂T = ∂Tr(TA)/∂T = α(A) + α(A)^T - α(A) ∘ I                     (144)




Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 16 



3 INVERSES 


3 Inverses 

3.1 Basic 

3.1.1 Definition 

The inverse A -1 of a matrix A £ C" xn is defined such that 

AA -1 = A _1 A = I, (145) 

where I is the n x n identity matrix. If A -1 exists, A is said to be nonsingular. 
Otherwise, A is said to be singular (see e.g. HU). 


3.1.2 Cofactors and Adjoint 

The submatrix of a matrix A, denoted by [A]_ij, is an (n-1) × (n-1) matrix obtained by deleting the ith row and the jth column of A. The (i,j) cofactor of a matrix is defined as

    cof(A, i, j) = (-1)^{i+j} det([A]_ij)                                  (146)

The matrix of cofactors can be created from the cofactors

    cof(A) = [ cof(A,1,1)   ...   cof(A,1,n)
               ...          cof(A,i,j)   ...
               cof(A,n,1)   ...   cof(A,n,n) ]                             (147)

The adjoint matrix is the transpose of the cofactor matrix

    adj(A) = (cof(A))^T                                                    (148)


3.1.3 Determinant 

The determinant of a matrix A £ C nx " i s defined as (see [12]) 

n 

det(A) = ^^(— l)- J+1 Aij det ([A]ij) (149) 

i= i 

n 

= ^Aycof(A,l,j). (150) 

i= i 


3.1.4 Construction 

The inverse matrix can be constructed, using the adjoint matrix, by

    A^-1 = (1/det(A)) · adj(A)                                             (151)

For the case of 2 × 2 matrices, see section 1.3.


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 17 


3.2 Exact Relations 


3 INVERSES 


3.1.5 Condition number 


The condition number of a matrix c( A) is the ratio between the largest and the 
smallest singular value of a matrix (see Section 5.3 on singular values), 


C (A) = 


d+ 

d- 


(152) 


The condition number can be used to measure how singular a matrix is. If the 
condition number is large, it indicates that the matrix is nearly singular. The 
condition number can also be estimated from the matrix norms. Here 


C (A) = || A|| • || A -1 1| , 


(153) 


where || • || is a norm such as e.g the 1-norm, the 2-norm, the oo-norm or the 
Frobenius norm (see Sec 10.4 for more on matrix norms). 

The 2-norm of A equals \J (max(eig(A ff A))) [121 P-57]. For a symmetric 
matrix, this reduces to ||A|| 2 = max(|eig(A)|) [T2] p.394]. If the matrix is 
symmetric and positive definite, 1 1 A 1 1 2 = max(eig(A)). The condition number 
based on the 2-norm thus reduces to 


| 2 = max(eig(A)) max(eig(A )) = 


max(eig(A)) 

min(eig(A)) 


(154) 


3.2 Exact Relations 

3.2.1 Basic

    (AB)^-1 = B^-1 A^-1                                                    (155)

3.2.2 The Woodbury identity

The Woodbury identity comes in many variants. The latter of the two can be found in [12]

    (A + C B C^T)^-1 = A^-1 - A^-1 C (B^-1 + C^T A^-1 C)^-1 C^T A^-1       (156)
    (A + U B V)^-1 = A^-1 - A^-1 U (B^-1 + V A^-1 U)^-1 V A^-1             (157)

If P, R are positive definite, then (see [30])

    (P^-1 + B^T R^-1 B)^-1 B^T R^-1 = P B^T (B P B^T + R)^-1               (158)
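
A quick numerical check of the Woodbury identity (157); a NumPy sketch in which the well-conditioned test matrices are illustrative assumptions, not part of the original text:

    import numpy as np

    rng = np.random.default_rng(3)
    n, k = 6, 2
    A = rng.standard_normal((n, n)) + n * np.eye(n)     # keep A well conditioned
    U = rng.standard_normal((n, k))
    V = rng.standard_normal((k, n))
    B = rng.standard_normal((k, k)) + k * np.eye(k)

    lhs = np.linalg.inv(A + U @ B @ V)
    Ai, Bi = np.linalg.inv(A), np.linalg.inv(B)
    rhs = Ai - Ai @ U @ np.linalg.inv(Bi + V @ Ai @ U) @ V @ Ai            # eq. (157)
    assert np.allclose(lhs, rhs)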

3.2.3 The Kailath Variant

    (A + BC)^-1 = A^-1 - A^-1 B (I + C A^-1 B)^-1 C A^-1                   (159)

See [4, page 153].

3.2.4 Sherman-Morrison

    (A + b c^T)^-1 = A^-1 - (A^-1 b c^T A^-1) / (1 + c^T A^-1 b)           (160)


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 18 


3.2 Exact Relations 


3 INVERSES 


3.2.5 The Searle Set of Identities 


The following set of identities, 
(1 + A’ 1 )" 1 

can be found in [25 , page 151], 
= A(A + I)- 1 

(161) 

(A + BB t )~ 1 B 

= a- 1 b(i + b t a- 1 b)- 1 

(162) 

(A' 1 + B- 1 )" 1 

= A(A + B)" 1 B = B(A + B)" 1 A 

(163) 

A — A(A + B) _1 A 

= B-B(A + B)' 1 B 

(164) 

A' 1 + B' 1 

= A _1 (A + B)B^ 1 

(165) 

(I + AB)- 1 

= I — A(I + BA) _1 B 

(166) 

(1 + AB)- X A 

= A(I-lBA)' 1 

(167) 


3.2.6 Rank-1 update of inverse of inner product

Denote A = (X^T X)^-1 and assume X is extended to include a new column vector at the end, X~ = [X v]. Then [34]

    (X~^T X~)^-1 = [ A + (A X^T v v^T X A^T)/(v^T v - v^T X A X^T v)    -(A X^T v)/(v^T v - v^T X A X^T v)
                     -(v^T X A^T)/(v^T v - v^T X A X^T v)                1/(v^T v - v^T X A X^T v)          ]

3.2.7 Rank-1 update of Moore-Penrose Inverse

The following is a rank-1 update for the Moore-Penrose pseudo-inverse of real valued matrices, and a proof can be found in [18]. The matrix G is defined below:

    (A + c d^T)^+ = A^+ + G                                                (168)

Using the notation

    β = 1 + d^T A^+ c                                                      (169)
    v = A^+ c                                                              (170)
    n = (A^+)^T d                                                          (171)
    w = (I - A A^+) c                                                      (172)
    m = (I - A^+ A)^T d                                                    (173)

the solution is given as six different cases, depending on the entities ||w||, ||m||, and β. Please note that for any (column) vector v it holds that v^+ = v^T (v^T v)^-1 = v^T / ||v||^2. The solution is:

Case 1 of 6: If ||w|| ≠ 0 and ||m|| ≠ 0. Then

    G = -v w^+ - (m^+)^T n^T + β (m^+)^T w^+                               (174)
      = -(1/||w||^2) v w^T - (1/||m||^2) m n^T + (β/(||w||^2 ||m||^2)) m w^T        (175)

Case 2 of 6: If ||w|| = 0 and ||m|| ≠ 0 and β = 0. Then

    G = -v v^+ A^+ - (m^+)^T n^T                                           (176)
      = -(1/||v||^2) v v^T A^+ - (1/||m||^2) m n^T                         (177)


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 19 


3.3 Implication on Inverses 


3 INVERSES 


Case 3 of 6: If ||w|| = 0 and β ≠ 0. Then

    G = (1/β) m v^T A^+
        - (β/(||v||^2 ||m||^2 + β^2)) ( (||v||^2/β) m + v ) ( (||m||^2/β) (A^+)^T v + n )^T        (178)

Case 4 of 6: If ||w|| ≠ 0 and ||m|| = 0 and β = 0. Then

    G = -A^+ n n^+ - v w^+                                                 (179)
      = -(1/||n||^2) A^+ n n^T - (1/||w||^2) v w^T                         (180)

Case 5 of 6: If ||m|| = 0 and β ≠ 0. Then

    G = (1/β) A^+ n w^T
        - (β/(||n||^2 ||w||^2 + β^2)) ( (||w||^2/β) A^+ n + v ) ( (||n||^2/β) w + n )^T        (181)

Case 6 of 6: If ||w|| = 0 and ||m|| = 0 and β = 0. Then

    G = -v v^+ A^+ - A^+ n n^+ + v^+ A^+ n v n^+                           (182)
      = -(1/||v||^2) v v^T A^+ - (1/||n||^2) A^+ n n^T + (v^T A^+ n)/(||v||^2 ||n||^2) v n^T        (183)


3.3 Implication on Inverses 

    If (A + B)^-1 = A^-1 + B^-1 then A B^-1 A = B A^-1 B                   (184)

See [25].


3.3.1 A PosDef identity 

Assume P, R to be positive definite and invertible, then

    (P^-1 + B^T R^-1 B)^-1 B^T R^-1 = P B^T (B P B^T + R)^-1               (185)

See [30].


3.4 Approximations 

The following identity is known as the Neumann series of a matrix, which holds when |λ_i| < 1 for all eigenvalues λ_i

    (I - A)^-1 = Σ_{n=0}^{∞} A^n                                           (186)

which is equivalent to

    (I + A)^-1 = Σ_{n=0}^{∞} (-1)^n A^n                                    (187)

When |λ_i| < 1 for all eigenvalues λ_i, it holds that A^n → 0 for n → ∞, and the following approximations hold

    (I - A)^-1 ≅ I + A + A^2                                               (188)
    (I + A)^-1 ≅ I - A + A^2                                               (189)
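
A numerical illustration of (186) and (188); a NumPy sketch in which the small random A (spectral radius well below one) is an assumed example:

    import numpy as np

    rng = np.random.default_rng(4)
    A = 0.1 * rng.standard_normal((5, 5))            # eigenvalues well inside the unit circle
    exact = np.linalg.inv(np.eye(5) - A)

    # partial sums of the Neumann series (186)
    series = sum(np.linalg.matrix_power(A, n) for n in range(10))
    print(np.max(np.abs(exact - series)))            # very small

    # second-order approximation (188)
    approx = np.eye(5) + A + A @ A
    print(np.max(np.abs(exact - approx)))            # larger, but still small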


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 20 


3.5 Generalized Inverse 


3 INVERSES 


The following approximation holds when A is large and symmetric

    A - A (I + A)^-1 A ≅ I - A^-1                                          (190)

If σ^2 is small compared to Q and M then

    (Q + σ^2 M)^-1 ≅ Q^-1 - σ^2 Q^-1 M Q^-1                                (191)

Proof:

    (Q + σ^2 M)^-1                                                         (192)
    = (Q Q^-1 Q + σ^2 M Q^-1 Q)^-1                                         (193)
    = ((I + σ^2 M Q^-1) Q)^-1                                              (194)
    = Q^-1 (I + σ^2 M Q^-1)^-1                                             (195)

This can be rewritten using the Taylor expansion:

    Q^-1 (I + σ^2 M Q^-1)^-1                                               (196)
    = Q^-1 (I - σ^2 M Q^-1 + (σ^2 M Q^-1)^2 - ...) ≅ Q^-1 - σ^2 Q^-1 M Q^-1        (197)

3.5 Generalized Inverse 

3.5.1 Definition 

A generalized inverse matrix of the matrix A is any matrix A^- such that

    A A^- A = A                                                            (198)

The matrix A^- is not unique.

3.6 Pseudo Inverse 

3.6.1 Definition 

The pseudo inverse (or Moore-Penrose inverse) of a matrix A is the matrix A^+ that fulfils

    I    A A^+ A = A
    II   A^+ A A^+ = A^+
    III  A A^+ symmetric
    IV   A^+ A symmetric

The matrix A^+ is unique and does always exist. Note that in case of complex matrices, the symmetric condition is substituted by a condition of being Hermitian.


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 21 


3.6 Pseudo Inverse 


3 INVERSES 


3.6.2 Properties 

Assume A^+ to be the pseudo-inverse of A, then (see [3] for some of them)

    (A^+)^+ = A                                                            (199)
    (A^T)^+ = (A^+)^T                                                      (200)
    (A^H)^+ = (A^+)^H                                                      (201)
    (A^*)^+ = (A^+)^*                                                      (202)
    (A^+ A) A^H = A^H                                                      (203)
    (A^+ A) A^T ≠ A^T                                                      (204)
    (cA)^+ = (1/c) A^+                                                     (205)
    A^+ = (A^T A)^+ A^T                                                    (206)
    A^+ = A^T (A A^T)^+                                                    (207)
    (A^T A)^+ = A^+ (A^T)^+                                                (208)
    (A A^T)^+ = (A^T)^+ A^+                                                (209)
    A^+ = (A^H A)^+ A^H                                                    (210)
    A^+ = A^H (A A^H)^+                                                    (211)
    (A^H A)^+ = A^+ (A^H)^+                                                (212)
    (A A^H)^+ = (A^H)^+ A^+                                                (213)
    (AB)^+ = (A^+ A B)^+ (A B B^+)^+                                       (214)
    f(A^H A) - f(0) I = A^+ [ f(A A^H) - f(0) I ] A                        (215)
    f(A A^H) - f(0) I = A [ f(A^H A) - f(0) I ] A^+                        (216)

where A ∈ C^{n×m}.

Assume A to have full rank, then

    (A A^+)(A A^+) = A A^+                                                 (217)
    (A^+ A)(A^+ A) = A^+ A                                                 (218)
    Tr(A A^+) = rank(A A^+)              (See [26])                        (219)
    Tr(A^+ A) = rank(A^+ A)              (See [26])                        (220)

For two matrices it hold that

    (AB)^+ = (A^+ A B)^+ (A B B^+)^+                                       (221)
    (A ⊗ B)^+ = A^+ ⊗ B^+                                                  (222)

3.6.3 Construction

Assume that A has full rank, then

    A  n × n   Square   rank(A) = n   ⇒   A^+ = A^-1
    A  n × m   Broad    rank(A) = n   ⇒   A^+ = A^T (A A^T)^-1
    A  n × m   Tall     rank(A) = m   ⇒   A^+ = (A^T A)^-1 A^T

The so-called "broad version" is also known as the right inverse and the "tall version" as the left inverse.


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 22 


3.6 Pseudo Inverse 


3 INVERSES 


Assume A does not have full rank, i.e. A is n x m and rank(A) = r < 
min(n, m). The pseudo inverse A + can be constructed from the singular value 
decomposition A = UDV T , by 

A+ = (223) 

where U r , D r , and V r are the matrices with the degenerated rows and columns 
deleted. A different way is this: There do always exist two matrices C n x r 
and D r x m of rank r, such that A = CD. Using these matrices it holds that 

A+ = D t (DD t )- 1 (C t C)" 1 C t (224) 


See [3]. 
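
The constructions above can be compared with numpy.linalg.pinv; a minimal NumPy sketch (random test matrices are assumed examples) for the tall full-rank case and the rank-deficient SVD case of (223):

    import numpy as np

    rng = np.random.default_rng(5)

    # tall, full column rank: A+ = (A^T A)^-1 A^T
    A = rng.standard_normal((6, 3))
    Ap = np.linalg.inv(A.T @ A) @ A.T
    assert np.allclose(Ap, np.linalg.pinv(A))

    # rank deficient: keep only the r nonzero singular values, eq. (223)
    B = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 5))   # rank 2
    U, s, Vt = np.linalg.svd(B)
    r = 2
    Bp = Vt[:r].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T
    assert np.allclose(Bp, np.linalg.pinv(B))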


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 23 


4 COMPLEX MATRICES 


4 Complex Matrices 

The complex scalar product r = pq can be written as

    [ Re r ]   [ Re p   -Im p ] [ Re q ]
    [ Im r ] = [ Im p    Re p ] [ Im q ]                                   (225)


4.1 Complex Derivatives 


In order to differentiate an expression f(z) with respect to a complex z, the Cauchy-Riemann equations have to be satisfied ([7]):

    df(z)/dz = ∂Re(f(z))/∂Re z + i ∂Im(f(z))/∂Re z                         (226)

and

    df(z)/dz = -i ∂Re(f(z))/∂Im z + ∂Im(f(z))/∂Im z                        (227)

or in a more compact form:

    ∂f(z)/∂Im z = i ∂f(z)/∂Re z.                                           (228)

A complex function that satisfies the Cauchy-Riemann equations for points in a region R is said to be analytic in this region R. In general, expressions involving complex conjugate or conjugate transpose do not satisfy the Cauchy-Riemann equations. In order to avoid this problem, a more generalized definition of complex derivative is used ([23], [5]):

• Generalized Complex Derivative:

    df(z)/dz = 1/2 ( ∂f(z)/∂Re z - i ∂f(z)/∂Im z ).                        (229)

• Conjugate Complex Derivative:

    df(z)/dz^* = 1/2 ( ∂f(z)/∂Re z + i ∂f(z)/∂Im z ).                      (230)

The Generalized Complex Derivative equals the normal derivative, when f is an analytic function. For a non-analytic function such as f(z) = z^*, the derivative equals zero. The Conjugate Complex Derivative equals zero, when f is an analytic function. The Conjugate Complex Derivative has e.g. been used by [21] when deriving a complex gradient.

Notice:

    df(z)/dz ≠ ∂f(z)/∂Re z + i ∂f(z)/∂Im z.                                (231)

• Complex Gradient Vector: If f is a real function of a complex vector z, then the complex gradient vector is given by ([14, p. 798])

    ∇f(z) = 2 df(z)/dz^* = ∂f(z)/∂Re z + i ∂f(z)/∂Im z.                    (232)


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 24 


4.1 Complex Derivatives 


4 COMPLEX MATRICES 


• Complex Gradient Matrix: If f is a real function of a complex matrix Z, then the complex gradient matrix is given by ([2])

    ∇f(Z) = 2 df(Z)/dZ^* = ∂f(Z)/∂Re Z + i ∂f(Z)/∂Im Z.                    (233)

These expressions can be used for gradient descent algorithms.


4.1.1 The Chain Rule for complex numbers 


The chain rule is a little more complicated when the function of a complex u = f(x) is non-analytic. For a non-analytic function, the following chain rule can be applied ([7])

    ∂g(u)/∂x = (∂g/∂u)(∂u/∂x) + (∂g/∂u^*)(∂u^*/∂x)
             = (∂g/∂u)(∂u/∂x) + (∂g^*/∂u)^* (∂u^*/∂x)                      (234)

Notice, if the function is analytic, the second term reduces to zero, and the expression reduces to the normal well-known chain rule. For the matrix derivative of a scalar function g(U), the chain rule can be written the following way:

    ∂g(U)/∂X = Tr[ (∂g(U)/∂U)^T ∂U ]/∂X + Tr[ (∂g(U)/∂U^*)^T ∂U^* ]/∂X     (235)


4.1.2 Complex Derivatives of Traces 


If the derivatives involve complex numbers, the conjugate transpose is often involved. The most useful way to show a complex derivative is to show the derivative with respect to the real and the imaginary part separately. An easy example is:

    ∂Tr(X^*)/∂Re X = ∂Tr(X^H)/∂Re X = I                                    (236)
    i ∂Tr(X^*)/∂Im X = i ∂Tr(X^H)/∂Im X = I                                (237)

Since the two results have the same sign, the conjugate complex derivative (230) should be used.

    ∂Tr(X)/∂Re X = ∂Tr(X^T)/∂Re X = I                                      (238)
    i ∂Tr(X)/∂Im X = i ∂Tr(X^T)/∂Im X = -I                                 (239)

Here, the two results have different signs, and the generalized complex derivative (229) should be used. Hereby, it can be seen that (100) holds even if X is a complex number.

    ∂Tr(A X^H)/∂Re X = A                                                   (240)
    i ∂Tr(A X^H)/∂Im X = A                                                 (241)
    ∂Tr(A X^*)/∂Re X = A^T                                                 (242)
    i ∂Tr(A X^*)/∂Im X = A^T                                               (243)
    ∂Tr(X X^H)/∂Re X = ∂Tr(X^H X)/∂Re X = 2 Re X                           (244)
    i ∂Tr(X X^H)/∂Im X = i ∂Tr(X^H X)/∂Im X = i 2 Im X                     (245)

By inserting (244) and (245) in (229) and (230), it can be seen that

    ∂Tr(X X^H)/∂X = X^*                                                    (246)
    ∂Tr(X X^H)/∂X^* = X                                                    (247)

Since the function Tr(X X^H) is a real function of the complex matrix X, the complex gradient matrix (233) is given by

    ∇Tr(X X^H) = 2 ∂Tr(X X^H)/∂X^* = 2X                                    (248)


4.1.3 Complex Derivative Involving Determinants 


Here, a calculation example is provided. The objective is to find the derivative of det(X^H A X) with respect to X ∈ C^{m×n}. The derivative is found with respect to the real part and the imaginary part of X, by use of (42) and (37); det(X^H A X) can be calculated as (see App. B.1.4 for details)

    ∂det(X^H A X)/∂X = 1/2 ( ∂det(X^H A X)/∂Re X - i ∂det(X^H A X)/∂Im X )
                     = det(X^H A X) ( (X^H A X)^-1 X^H A )^T               (249)

and the complex conjugate derivative yields

    ∂det(X^H A X)/∂X^* = 1/2 ( ∂det(X^H A X)/∂Re X + i ∂det(X^H A X)/∂Im X )
                       = det(X^H A X) A X (X^H A X)^-1                     (250)

4.2 Higher order and non-linear derivatives

    ∂/∂x (Ax)^H (Ax) / ((Bx)^H (Bx)) = ∂/∂x (x^H A^H A x)/(x^H B^H B x)    (251)
        = 2 A^H A x / (x^H B^H B x) - 2 (x^H A^H A x)(B^H B x) / (x^H B^H B x)^2        (252)


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 26 


4.3 Inverse of complex sum 


4 COMPLEX MATRICES 


4.3 Inverse of complex sum 

Given real matrices A, B, find the inverse of the complex sum A + iB. Form the auxiliary matrices

    E = A + tB                                                             (253)
    F = B - tA,                                                            (254)

and find a value of t such that E^-1 exists. Then

    (A + iB)^-1 = (1 - it)(E + iF)^-1                                      (255)
        = (1 - it)( (E + F E^-1 F)^-1 - i (E + F E^-1 F)^-1 F E^-1 )       (256)
        = (1 - it)(E + F E^-1 F)^-1 (I - i F E^-1)                         (257)
        = (E + F E^-1 F)^-1 ( (I - t F E^-1) - i (t I + F E^-1) )          (258)
        = (E + F E^-1 F)^-1 (I - t F E^-1)
          - i (E + F E^-1 F)^-1 (t I + F E^-1)                             (259)


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 27 



5 SOLUTIONS AND DECOMPOSITIONS 


5 Solutions and Decompositions 

5.1 Solutions to linear equations 

5.1.1 Simple Linear Regression 

Assume we have data (x_n, y_n) for n = 1, ..., N and are seeking the parameters a, b ∈ R such that y_i ≅ a x_i + b. With a least squares error function, the optimal values for a, b can be expressed using the notation

    x = (x_1, ..., x_N)^T    y = (y_1, ..., y_N)^T    1 = (1, ..., 1)^T ∈ R^{N×1}

and

    R_xx = x^T x    R_x1 = x^T 1    R_11 = 1^T 1
    R_yx = y^T x    R_y1 = y^T 1

as

    [ a ]   [ R_xx   R_x1 ]^-1 [ R_yx ]
    [ b ] = [ R_x1   R_11 ]    [ R_y1 ]                                    (260)


5.1.2 Existence in Linear Systems 

Assume A is n × m and consider the linear system

    A x = b                                                                (261)

Construct the augmented matrix B = [A b], then

    Condition                         Solution
    rank(A) = rank(B) = m             Unique solution x
    rank(A) = rank(B) < m             Many solutions x
    rank(A) < rank(B)                 No solutions x


5.1.3 Standard Square 

Assume A is square and invertible, then

    A x = b   ⇒   x = A^-1 b                                               (262)


5.1.4 Degenerated Square 

Assume A is n x n but of rank r < n. In that case, the system Ax = b is solved 

by 

x = A+b 

where A + is the pseudo-inverse of the rank-deficient matrix, constructed as 
described in section 13.6.31 


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 28 


5.1 Solutions to linear equa tions5 SOLUTIONS AND DECOMPOSITIONS 


5.1.5 Cramer’s rule 


The equation

    A x = b,                                                               (263)

where A is square, has exactly one solution x, and the ith element in x can be found as

    x_i = det(B) / det(A),                                                 (264)

where B equals A, but the ith column in A has been substituted by b.


5.1.6 Over-determined Rectangular 

Assume A to be n x m, n > m (tall) and rank(A) = m, then 

Ax = b x = (A T A) -1 A T b = A+b (265) 

that is if there exists a solution x at all! If there is no solution the following 
can be useful: 

Ax = b => x„„; ra = A+b (266) 

Now x m j„ is the vector x which minimizes ||Ax — b| | 2 , i.e. the vector which is 
’’least wrong”. The matrix A+ is the pseudo-inverse of A. See [5J. 
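
In code, the over-determined solution (265)-(266) is just a least-squares solve; a NumPy sketch with assumed random data (not part of the original text) showing that the normal equations, the pseudo-inverse and the library solver agree:

    import numpy as np

    rng = np.random.default_rng(6)
    A = rng.standard_normal((10, 3))                # tall, rank 3
    b = rng.standard_normal(10)

    x_normal = np.linalg.inv(A.T @ A) @ A.T @ b     # (A^T A)^-1 A^T b
    x_pinv = np.linalg.pinv(A) @ b                  # A+ b
    x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]
    assert np.allclose(x_normal, x_pinv) and np.allclose(x_pinv, x_lstsq)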

5.1.7 Under-determined Rectangular 

Assume A is n x m and n < m (’’broad”) and rank(A) = n. 

Ax = b => Xmin = A r (AA J ) _1 b (267) 

The equation have many solutions x. But x m j n is the solution which minimizes 
1 1 Ax — b 1 1 2 and also the solution with the smallest norm | |x| | 2 . The same holds 
for a matrix version: Assume A is n x m, X is m x n and B is n x n, then 

AX = B => X min = A+B (268) 

The equation have many solutions X. But X m ,„ is the solution which minimizes 
1 1 AX — B 1 1 2 and also the solution with the smallest norm ||X|| 2 . See [3]. 

Similar but different: Assume A is square nxn and the matrices Bq,Bi 
are n x A, where N > n, then if B 0 has maximal rank 

ABo = Bi => A m j„ = BiBq (BoBq ) 1 (269) 

where A. mirl denotes the matrix which is optimal in a least square sense. An 
interpretation is that A is the linear approximation which maps the columns 
vectors of Bq into the columns vectors of Bi. 


5.1.8 Linear form and zeros 

Ax = 0, Vx => A = 0 (270) 

5.1.9 Square form and zeros 

If A is symmetric, then 

x t Ax = 0, Vx => A = 0 (271) 


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 29 


5.2 Eigenvalues and Eigenvector § SOLUTIONS AND DECOMPOSITIONS 


5.1.10 The Lyapunov Equation

    A X + X B = C                                                          (272)
    vec(X) = (I ⊗ A + B^T ⊗ I)^-1 vec(C)                                   (273)

See Sec 10.2.1 and 10.2.2 for details on the Kronecker product and the vec operator.

5.1.11 Encapsulating Sum

    Σ_n A_n X B_n = C                                                      (274)
    vec(X) = ( Σ_n B_n^T ⊗ A_n )^-1 vec(C)                                 (275)

See Sec 10.2.1 and 10.2.2 for details on the Kronecker product and the vec operator.
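
The vec identity (273) gives a direct, if not the most efficient, way to solve AX + XB = C. A NumPy sketch under assumed small random matrices (dedicated Sylvester solvers such as scipy.linalg.solve_sylvester are preferable for larger problems); note that column-stacking (Fortran order) is the vec convention assumed by the Kronecker identity:

    import numpy as np

    rng = np.random.default_rng(7)
    n = 4
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))
    C = rng.standard_normal((n, n))

    # vec(X) = (I kron A + B^T kron I)^-1 vec(C), with column-stacking vec
    K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(n))
    x = np.linalg.solve(K, C.flatten(order="F"))
    X = x.reshape((n, n), order="F")
    assert np.allclose(A @ X + X @ B, C)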


5.2 Eigenvalues and Eigenvectors 

5.2.1 Definition

The eigenvectors v_i and eigenvalues λ_i are the ones satisfying

    A v_i = λ_i v_i                                                        (276)

5.2.2 Decompositions

For matrices A with as many distinct eigenvalues as dimensions, the following holds, where the columns of V are the eigenvectors and (D)_ij = δ_ij λ_i,

    A V = V D                                                              (277)

For defective matrices A, which are matrices that have fewer distinct eigenvalues than dimensions, the following decomposition, called the Jordan canonical form, holds

    A V = V J                                                              (278)

where J is a block diagonal matrix with the blocks J_i = λ_i I + N. The matrices J_i have dimensionality equal to the number of identical eigenvalues equal to λ_i, and N is a square matrix of the same size with 1 on the super diagonal and zero elsewhere. It also holds that for all matrices A there exist matrices V and R such that

    A V = V R                                                              (279)

where R is upper triangular with the eigenvalues λ_i on its diagonal.

5.2.3 General Properties

Assume that A ∈ R^{n×m} and B ∈ R^{m×n},

    eig(AB) = eig(BA)                                                      (280)
    rank(A) = r   ⇒   at most r non-zero λ_i                               (281)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 30 


5.3 Singular Value Decomposition SOLUTIONS AND DECOMPOSITIONS 


5.2.4 Symmetric

Assume A is symmetric, then

    V V^T = I            (i.e. V is orthogonal)                            (282)
    λ_i ∈ R              (i.e. λ_i is real)                                (283)
    Tr(A^p) = Σ_i λ_i^p                                                    (284)
    eig(I + cA) = 1 + c λ_i                                                (285)
    eig(A - cI) = λ_i - c                                                  (286)
    eig(A^-1) = λ_i^-1                                                     (287)

For a symmetric, positive matrix A,

    eig(A^T A) = eig(A A^T) = eig(A) ∘ eig(A)                              (288)

5.2.5 Characteristic polynomial

The characteristic polynomial for the matrix A is

    0 = det(A - λI)                                                        (289)
      = λ^n - g_1 λ^{n-1} + g_2 λ^{n-2} - ... + (-1)^n g_n                 (290)

Note that the coefficients g_j for j = 1, ..., n are the n invariants under rotation of A. Thus, g_j is the sum of the determinants of all the sub-matrices of A taken j rows and columns at a time. That is, g_1 is the trace of A, and g_2 is the sum of the determinants of the n(n-1)/2 sub-matrices that can be formed from A by deleting all but two rows and columns, and so on.


5.3 Singular Value Decomposition

Any n × m matrix A can be written as

    A = U D V^T,                                                           (291)

where

    U = eigenvectors of A A^T          (n × n)
    D = sqrt(diag(eig(A A^T)))         (n × m)
    V = eigenvectors of A^T A          (m × m)                             (292)

5.3.1 Symmetric Square decomposed into squares

Assume A to be n × n and symmetric. Then

    [ A ] = [ V ] [ D ] [ V^T ],                                           (293)

where D is diagonal with the eigenvalues of A, and V is orthogonal and the eigenvectors of A.

5.3.2 Square decomposed into squares

Assume A ∈ R^{n×n}. Then

    [ A ] = [ V ] [ D ] [ U^T ],                                           (294)

where D is diagonal with the square root of the eigenvalues of A A^T, V is the eigenvectors of A A^T and U^T is the eigenvectors of A^T A.


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 31 


5.4 Triangular Decomposition 5 SOLUTIONS AND DECOMPOSITIONS 


5.3.3 Square decomposed into rectangular

Assume V* D* U*^T = 0, then we can expand the SVD of A into

    [ A ] = [ V  V* ] [ D   0  ] [ U^T  ]
                      [ 0   D* ] [ U*^T ],                                 (295)

where the SVD of A is A = V D U^T.

5.3.4 Rectangular decomposition I

Assume A is n × m, V is n × n, D is n × n, U^T is n × m

    [ A ] = [ V ] [ D ] [ U^T ],                                           (296)

where D is diagonal with the square root of the eigenvalues of A A^T, V is the eigenvectors of A A^T and U^T is the eigenvectors of A^T A.

5.3.5 Rectangular decomposition II

Assume A is n × m, V is n × m, D is m × m, U^T is m × m

    [ A ] = [ V ] [ D ] [ U^T ]                                            (297)

5.3.6 Rectangular decomposition III

Assume A is n × m, V is n × n, D is n × m, U^T is m × m

    [ A ] = [ V ] [ D ] [ U^T ],                                           (298)

where D is diagonal with the square root of the eigenvalues of A A^T, V is the eigenvectors of A A^T and U^T is the eigenvectors of A^T A.

5.4 Triangular Decomposition 

5.5 LU decomposition 

Assume A is a square matrix with non-zero leading principal minors, then 

A = LU (299) 

where L is a unique unit lower triangular matrix and U is a unique upper 
triangular matrix. 

5.5.1 Cholesky-decomposition 

Assume A is a symmetric positive definite square matrix, then 

    A = U^T U = L L^T,                                                     (300)

where U is a unique upper triangular matrix and L is a lower triangular matrix. 

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 32 



5.6 LDM decomposition 


5 SOLUTIONS AND DECOMPOSITIONS 


5.6 LDM decomposition 

Assume A is a square matrix with non-zero leading principal minors(1), then

    A = L D M^T                                                            (301)

where L, M are unique unit lower triangular matrices and D is a unique diagonal matrix.

5.7 LDL decompositions

The LDL decompositions are special cases of the LDM decomposition. Assume A is a non-singular symmetric definite square matrix, then

    A = L D L^T = L^T D L                                                  (302)

where L is a unit lower triangular matrix and D is a diagonal matrix. If A is also positive definite, then D has strictly positive diagonal entries.

(1) If the matrix that corresponds to a principal minor is a quadratic upper-left part of the larger matrix (i.e., it consists of matrix elements in rows and columns from 1 to k), then the principal minor is called a leading principal minor. For an n × n square matrix, there are n leading principal minors.


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 33 


6 STATISTICS AND PROBABILITY 


6 Statistics and Probability 

6.1 Definition of Moments 

Assume x ∈ R^{n×1} is a random variable.

6.1.1 Mean

The vector of means, m, is defined by

    (m)_i = ⟨x_i⟩                                                          (303)

6.1.2 Covariance

The matrix of covariance M is defined by

    (M)_ij = ⟨(x_i - ⟨x_i⟩)(x_j - ⟨x_j⟩)⟩                                  (304)

or alternatively as

    M = ⟨(x - m)(x - m)^T⟩                                                 (305)

6.1.3 Third moments

The matrix of third centralized moments - in some contexts referred to as coskewness - is defined using the notation

    m_ijk^(3) = ⟨(x_i - ⟨x_i⟩)(x_j - ⟨x_j⟩)(x_k - ⟨x_k⟩)⟩                  (306)

as

    M_3 = [ m_::1^(3)   m_::2^(3)   ...   m_::n^(3) ]                      (307)

where ':' denotes all elements within the given index. M_3 can alternatively be expressed as

    M_3 = ⟨(x - m)(x - m)^T ⊗ (x - m)^T⟩                                   (308)

6.1.4 Fourth moments

The matrix of fourth centralized moments - in some contexts referred to as cokurtosis - is defined using the notation

    m_ijkl^(4) = ⟨(x_i - ⟨x_i⟩)(x_j - ⟨x_j⟩)(x_k - ⟨x_k⟩)(x_l - ⟨x_l⟩)⟩    (309)

as

    M_4 = [ m_::11^(4)  m_::21^(4) ... m_::n1^(4) | m_::12^(4) ... m_::n2^(4) | ... | m_::1n^(4) ... m_::nn^(4) ]        (310)

or alternatively as

    M_4 = ⟨(x - m)(x - m)^T ⊗ (x - m)^T ⊗ (x - m)^T⟩                       (311)


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 34 



6.2 Expectation of Linear CombinatidSis STATISTICS AND PROBABILITY 


6.2 Expectation of Linear Combinations 
6.2.1 Linear Forms 

Assume X and x to be a matrix and a vector of random variables. Then

    E[A X B + C] = A E[X] B + C                                            (312)
    Var[A x] = A Var[x] A^T                                                (313)
    Cov[A x, B y] = A Cov[x, y] B^T                                        (314)

Assume x to be a stochastic vector with mean m, then

    E[A x + b] = A m + b                                                   (315)
    E[A x] = A m                                                           (316)
    E[x + b] = m + b                                                       (317)

6.2.2 Quadratic Forms

Assume A is symmetric, c = E[x] and Σ = Var[x]. Assume also that all coordinates x_i are independent, have the same central moments μ_1, μ_2, μ_3, μ_4 and denote a = diag(A). Then (See [26])

    E[x^T A x] = Tr(A Σ) + c^T A c                                         (318)
    Var[x^T A x] = 2 μ_2^2 Tr(A^2) + 4 μ_2 c^T A^2 c + 4 μ_3 c^T A a + (μ_4 - 3 μ_2^2) a^T a        (319)

Also, assume x to be a stochastic vector with mean m, and covariance M. Then (see [7])

    E[(Ax + a)(Bx + b)^T] = A M B^T + (Am + a)(Bm + b)^T                   (320)
    E[x x^T] = M + m m^T                                                   (321)
    E[x a^T x] = (M + m m^T) a                                             (322)
    E[x^T a x^T] = a^T (M + m m^T)                                         (323)
    E[(Ax)(Ax)^T] = A (M + m m^T) A^T                                      (324)
    E[(x + a)(x + a)^T] = M + (m + a)(m + a)^T                             (325)
    E[(Ax + a)^T (Bx + b)] = Tr(A M B^T) + (Am + a)^T (Bm + b)             (326)
    E[x^T x] = Tr(M) + m^T m                                               (327)
    E[x^T A x] = Tr(A M) + m^T A m                                         (328)
    E[(Ax)^T (Ax)] = Tr(A M A^T) + (Am)^T (Am)                             (329)
    E[(x + a)^T (x + a)] = Tr(M) + (m + a)^T (m + a)                       (330)

See [7].


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 35 


6.3 Weighted Scalar Variable 


6 STATISTICS AND PROBABILITY 


6.2.3 Cubic Forms 


Assume x to be a stochastic vector with independent coordinates, mean m, covariance M and central moments v_3 = E[(x - m)^3]. Then (see [7])

    E[(Ax + a)(Bx + b)^T (Cx + c)] = A diag(B^T C) v_3
                                     + Tr(B M C^T)(Am + a)
                                     + A M C^T (Bm + b)
                                     + (A M B^T + (Am + a)(Bm + b)^T)(Cm + c)

    E[x x^T x] = v_3 + 2 M m + (Tr(M) + m^T m) m

    E[(Ax + a)(Ax + a)^T (Ax + a)] = A diag(A^T A) v_3
                                     + [2 A M A^T + (Am + a)(Am + a)^T](Am + a)
                                     + Tr(A M A^T)(Am + a)

    E[(Ax + a) b^T (Cx + c)(Dx + d)^T] = (Am + a) b^T (C M D^T + (Cm + c)(Dm + d)^T)
                                         + (A M C^T + (Am + a)(Cm + c)^T) b (Dm + d)^T
                                         + b^T (Cm + c)(A M D^T - (Am + a)(Dm + d)^T)


6.3 Weighted Scalar Variable 

Assume x ∈ R^{n×1} is a random variable, w ∈ R^{n×1} is a vector of constants and y is the linear combination y = w^T x. Assume further that m, M_2, M_3, M_4 denote the mean, covariance, and central third and fourth moment matrix of the variable x. Then it holds that

    ⟨y⟩ = w^T m                                                            (331)
    ⟨(y - ⟨y⟩)^2⟩ = w^T M_2 w                                              (332)
    ⟨(y - ⟨y⟩)^3⟩ = w^T M_3 (w ⊗ w)                                        (333)
    ⟨(y - ⟨y⟩)^4⟩ = w^T M_4 (w ⊗ w ⊗ w)                                    (334)


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 36 


7 MULTIVARIATE DISTRIBUTIONS 


7 Multivariate Distributions 


7.1 Cauchy

The density function for a Cauchy distributed vector t ∈ R^{P×1} is given by

    p(t|μ, Σ) = π^{-P/2} · Γ((1+P)/2)/Γ(1/2) · det(Σ)^{-1/2} / [1 + (t - μ)^T Σ^-1 (t - μ)]^{(1+P)/2}        (335)

where μ is the location, Σ is positive definite, and Γ denotes the gamma function. The Cauchy distribution is a special case of the Student-t distribution.

7.2 Dirichlet

The Dirichlet distribution is a kind of "inverse" distribution compared to the multinomial distribution on the bounded continuous variate x = [x_1, ..., x_P]

    p(x|α) = ( Γ(Σ_{p=1}^{P} α_p) / Π_{p=1}^{P} Γ(α_p) ) Π_{p=1}^{P} x_p^{α_p - 1}


7.3 Normal 

The normal distribution is also known as a Gaussian distribution. See sec. 8.

7.4 Normal-Inverse Gamma

7.5 Gaussian

See sec. 8.

7.6 Multinomial

If the vector n contains counts, i.e. (n)_i ∈ {0, 1, 2, ...}, then the discrete multinomial distribution for n is given by

    p(n|a, n) = n! / (n_1! ... n_d!) · Π_{i}^{d} a_i^{n_i},        Σ_{i}^{d} n_i = n        (336)

where a_i are probabilities, i.e. 0 ≤ a_i ≤ 1 and Σ_i a_i = 1.

7.7 Student's t

The density of a Student-t distributed vector t ∈ R^{P×1} is given by

    p(t|μ, Σ, ν) = (πν)^{-P/2} · Γ((ν+P)/2)/Γ(ν/2) · det(Σ)^{-1/2} / [1 + ν^{-1}(t - μ)^T Σ^-1 (t - μ)]^{(ν+P)/2}        (337)

where μ is the location, the scale matrix Σ is symmetric, positive definite, ν is the degrees of freedom, and Γ denotes the gamma function. For ν = 1, the Student-t distribution becomes the Cauchy distribution (see sec 7.1).


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 37 


7.8 Wishart 


7 MULTIVARIATE DISTRIBUTIONS 


7.7.1 Mean

    E(t) = μ,        ν > 1                                                 (338)

7.7.2 Variance

    cov(t) = ν/(ν - 2) Σ,        ν > 2                                     (339)

7.7.3 Mode

The notion mode meaning the position of the most probable value:

    mode(t) = μ                                                            (340)

7.7.4 Full Matrix Version

If instead of a vector t ∈ R^{P×1} one has a matrix T ∈ R^{P×N}, then the Student-t distribution for T is

    p(T|M, Ω, Σ, ν) = π^{-NP/2} Π_{p=1}^{P} Γ[(ν + P - p + 1)/2] / Γ[(ν - p + 1)/2] ×
                      ν det(Ω)^{-ν/2} det(Σ)^{-N/2} det[ Ω^-1 + (T - M) Σ^-1 (T - M)^T ]^{-(ν+P)/2}        (341)

where M is the location, Ω is the rescaling matrix, Σ is positive definite, ν is the degrees of freedom, and Γ denotes the gamma function.

7.8 Wishart

The central Wishart distribution for M ∈ R^{P×P}, M positive definite, where m can be regarded as a degree of freedom parameter (see [8, section 2.5], [11]):

    p(M|Σ, m) = 1 / ( 2^{mP/2} π^{P(P-1)/4} Π_{p=1}^{P} Γ[(m + 1 - p)/2] ) ×
                det(Σ)^{-m/2} det(M)^{(m-P-1)/2} exp[ -1/2 Tr(Σ^-1 M) ]    (342)

7.8.1 Mean

    E(M) = m Σ                                                             (343)


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 38 


7. 9 Wishart, Inverse 


7 MULTIVARIATE DISTRIBUTIONS 


7.9 Wishart, Inverse 

The (normal) Inverse Wishart distribution for M ∈ R^{P×P}, M positive definite, where m can be regarded as a degree of freedom parameter [11]:

    p(M|Σ, m) = 1 / ( 2^{mP/2} π^{P(P-1)/4} Π_{p=1}^{P} Γ[(m + 1 - p)/2] ) ×
                det(Σ)^{m/2} det(M)^{-(m-P-1)/2} exp[ -1/2 Tr(Σ M^-1) ]    (344)

7.9.1 Mean

    E(M) = Σ · 1/(m - P - 1)                                               (345)


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 39 


8 GAUSSIANS 


8 Gaussians 


8.1 Basics 

8.1.1 Density and normalization 

The density of x ~ N(m, Σ) is

    p(x) = 1/sqrt(det(2πΣ)) · exp[ -1/2 (x - m)^T Σ^-1 (x - m) ]           (346)

Note that if x is d-dimensional, then det(2πΣ) = (2π)^d det(Σ).

Integration and normalization

    ∫ exp[ -1/2 (x - m)^T Σ^-1 (x - m) ] dx = sqrt(det(2πΣ))

    ∫ exp[ -1/2 x^T Σ^-1 x + m^T Σ^-1 x ] dx = sqrt(det(2πΣ)) exp[ 1/2 m^T Σ^-1 m ]

    ∫ exp[ -1/2 x^T A x + c^T x ] dx = sqrt(det(2πA^-1)) exp[ 1/2 c^T A^-T c ]

If X = [x_1 x_2 ... x_n] and C = [c_1 c_2 ... c_n], then

    ∫ exp[ -1/2 Tr(X^T A X) + Tr(C^T X) ] dX = sqrt(det(2πA^-1))^n exp[ 1/2 Tr(C^T A^-1 C) ]

The derivatives of the density are

    ∂p(x)/∂x = -p(x) Σ^-1 (x - m)                                          (347)
    ∂^2 p / ∂x ∂x^T = p(x) ( Σ^-1 (x - m)(x - m)^T Σ^-1 - Σ^-1 )           (348)


8.1.2 Marginal Distribution 

Assume x ~ N_x(μ, Σ) where

    x = [ x_a ]        μ = [ μ_a ]        Σ = [ Σ_a    Σ_c ]
        [ x_b ]            [ μ_b ]            [ Σ_c^T  Σ_b ]               (349)

then

    p(x_a) = N_xa(μ_a, Σ_a)                                                (350)
    p(x_b) = N_xb(μ_b, Σ_b)                                                (351)


8.1.3 Conditional Distribution 

Assume x ~ N_x(μ, Σ) where

    x = [ x_a ]        μ = [ μ_a ]        Σ = [ Σ_a    Σ_c ]
        [ x_b ]            [ μ_b ]            [ Σ_c^T  Σ_b ]               (352)

then

    p(x_a|x_b) = N_xa(μ̂_a, Σ̂_a)   with   μ̂_a = μ_a + Σ_c Σ_b^-1 (x_b - μ_b)
                                           Σ̂_a = Σ_a - Σ_c Σ_b^-1 Σ_c^T    (353)

    p(x_b|x_a) = N_xb(μ̂_b, Σ̂_b)   with   μ̂_b = μ_b + Σ_c^T Σ_a^-1 (x_a - μ_a)
                                           Σ̂_b = Σ_b - Σ_c^T Σ_a^-1 Σ_c    (354)

Note that the covariance matrices are the Schur complement of the block matrix, see 9.1.5 for details.
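
Equation (353) is the workhorse behind Gaussian conditioning (e.g. in Kalman filtering and Gaussian process regression). A small NumPy sketch, in which the joint covariance and observed value are assumed examples, computing the conditional mean and covariance and checking that the result is still a valid covariance:

    import numpy as np

    rng = np.random.default_rng(8)
    # build a random joint covariance for (x_a, x_b) with dimensions 2 + 3
    L = rng.standard_normal((5, 5))
    S = L @ L.T + 5 * np.eye(5)
    Sa, Sc, Sb = S[:2, :2], S[:2, 2:], S[2:, 2:]
    mu_a, mu_b = rng.standard_normal(2), rng.standard_normal(3)
    x_b = rng.standard_normal(3)                      # observed value of x_b

    # eq. (353): p(x_a | x_b) = N(mu_hat_a, S_hat_a)
    mu_hat_a = mu_a + Sc @ np.linalg.solve(Sb, x_b - mu_b)
    S_hat_a = Sa - Sc @ np.linalg.solve(Sb, Sc.T)
    assert np.all(np.linalg.eigvalsh(S_hat_a) > 0)    # positive definite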


8.1.4 Linear combination 

Assume x ~ N(m_x, Σ_x) and y ~ N(m_y, Σ_y), then

    A x + B y + c ~ N(A m_x + B m_y + c, A Σ_x A^T + B Σ_y B^T)            (355)

8.1.5 Rearranging Means

    N_Ax[m, Σ] = sqrt(det(2π (A^T Σ^-1 A)^-1)) / sqrt(det(2πΣ)) · N_x[A^-1 m, (A^T Σ^-1 A)^-1]        (356)

If A is square and invertible, it simplifies to

    N_Ax[m, Σ] = 1/|det(A)| · N_x[A^-1 m, (A^T Σ^-1 A)^-1]                 (357)


8.1.6 Rearranging into squared form 

If A is symmetric, then

    -1/2 x^T A x + b^T x = -1/2 (x - A^-1 b)^T A (x - A^-1 b) + 1/2 b^T A^-1 b

    -1/2 Tr(X^T A X) + Tr(B^T X) = -1/2 Tr[(X - A^-1 B)^T A (X - A^-1 B)] + 1/2 Tr(B^T A^-1 B)

8.1.7 Sum of two squared forms

In vector formulation (assuming Σ_1, Σ_2 are symmetric)

    -1/2 (x - m_1)^T Σ_1^-1 (x - m_1)                                      (358)
    -1/2 (x - m_2)^T Σ_2^-1 (x - m_2)                                      (359)
    = -1/2 (x - m_c)^T Σ_c^-1 (x - m_c) + C                                (360)

    Σ_c^-1 = Σ_1^-1 + Σ_2^-1                                               (361)
    m_c = (Σ_1^-1 + Σ_2^-1)^-1 (Σ_1^-1 m_1 + Σ_2^-1 m_2)                   (362)
    C = 1/2 (m_1^T Σ_1^-1 + m_2^T Σ_2^-1)(Σ_1^-1 + Σ_2^-1)^-1 (Σ_1^-1 m_1 + Σ_2^-1 m_2)        (363)
        - 1/2 (m_1^T Σ_1^-1 m_1 + m_2^T Σ_2^-1 m_2)                        (364)


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 41 


8.2 Moments 


8 GAUSSIANS 


In a trace formulation (assuming Σ_1, Σ_2 are symmetric)

    -1/2 Tr((X - M_1)^T Σ_1^-1 (X - M_1))                                  (365)
    -1/2 Tr((X - M_2)^T Σ_2^-1 (X - M_2))                                  (366)
    = -1/2 Tr[(X - M_c)^T Σ_c^-1 (X - M_c)] + C                            (367)

    Σ_c^-1 = Σ_1^-1 + Σ_2^-1                                               (368)
    M_c = (Σ_1^-1 + Σ_2^-1)^-1 (Σ_1^-1 M_1 + Σ_2^-1 M_2)                   (369)
    C = 1/2 Tr[(Σ_1^-1 M_1 + Σ_2^-1 M_2)^T (Σ_1^-1 + Σ_2^-1)^-1 (Σ_1^-1 M_1 + Σ_2^-1 M_2)]
        - 1/2 Tr(M_1^T Σ_1^-1 M_1 + M_2^T Σ_2^-1 M_2)                      (370)

8.1.8 Product of gaussian densities

Let N_x(m, Σ) denote a density of x, then

    N_x(m_1, Σ_1) · N_x(m_2, Σ_2) = c_c N_x(m_c, Σ_c)                      (371)

    c_c = N_m1(m_2, (Σ_1 + Σ_2))
        = 1/sqrt(det(2π(Σ_1 + Σ_2))) exp[ -1/2 (m_1 - m_2)^T (Σ_1 + Σ_2)^-1 (m_1 - m_2) ]
    m_c = (Σ_1^-1 + Σ_2^-1)^-1 (Σ_1^-1 m_1 + Σ_2^-1 m_2)
    Σ_c = (Σ_1^-1 + Σ_2^-1)^-1

but note that the product is not normalized as a density of x.


8.2 Moments 

8.2.1 Mean and covariance of linear forms

First and second moments. Assume x ~ N(m, Σ)

    E(x) = m                                                               (372)
    Cov(x, x) = Var(x) = Σ = E(x x^T) - E(x) E(x^T) = E(x x^T) - m m^T     (373)

As for any other distribution it holds for gaussians that

    E[A x] = A E[x]                                                        (374)
    Var[A x] = A Var[x] A^T                                                (375)
    Cov[A x, B y] = A Cov[x, y] B^T                                        (376)

Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 42 



8.2 Moments 


8 GAUSSIANS 


8.2.2 Mean and variance of square forms 

Mean and variance of square forms: Assume x ~ N(m, Σ)

    E(x x^T) = Σ + m m^T                                                   (377)
    E[x^T A x] = Tr(A Σ) + m^T A m                                         (378)
    Var(x^T A x) = Tr[A Σ (A + A^T) Σ] + m^T (A + A^T) Σ (A + A^T) m       (379)
    E[(x - m')^T A (x - m')] = (m - m')^T A (m - m') + Tr(A Σ)             (380)

If Σ = σ^2 I and A is symmetric, then

    Var(x^T A x) = 2 σ^4 Tr(A^2) + 4 σ^2 m^T A^2 m                         (381)

Assume x ~ N(0, σ^2 I) and A and B to be symmetric, then

    Cov(x^T A x, x^T B x) = 2 σ^4 Tr(A B)                                  (382)


8.2.3 Cubic forms 

Assume x to be a stochastic vector with independent coordinates, mean m and covariance M

    E[x b^T x x^T] = m b^T (M + m m^T) + (M + m m^T) b m^T + b^T m (M - m m^T)        (383)


8.2.4 Mean of Quartic Forms 


    E[x x^T x x^T] = 2 (Σ + m m^T)^2 + m^T m (Σ - m m^T) + Tr(Σ)(Σ + m m^T)

    E[x x^T A x x^T] = (Σ + m m^T)(A + A^T)(Σ + m m^T)
                       + m^T A m (Σ - m m^T) + Tr[A Σ](Σ + m m^T)

    E[x^T x x^T x] = 2 Tr(Σ^2) + 4 m^T Σ m + (Tr(Σ) + m^T m)^2

    E[x^T A x x^T B x] = Tr[A Σ (B + B^T) Σ] + m^T (A + A^T) Σ (B + B^T) m
                         + (Tr(A Σ) + m^T A m)(Tr(B Σ) + m^T B m)

    E[a^T x b^T x c^T x d^T x]
        = (a^T (Σ + m m^T) b)(c^T (Σ + m m^T) d)
          + (a^T (Σ + m m^T) c)(b^T (Σ + m m^T) d)
          + (a^T (Σ + m m^T) d)(b^T (Σ + m m^T) c) - 2 a^T m b^T m c^T m d^T m

    E[(Ax + a)(Bx + b)^T (Cx + c)(Dx + d)^T]
        = [A Σ B^T + (Am + a)(Bm + b)^T][C Σ D^T + (Cm + c)(Dm + d)^T]
          + [A Σ C^T + (Am + a)(Cm + c)^T][B Σ D^T + (Bm + b)(Dm + d)^T]
          + (Bm + b)^T (Cm + c)[A Σ D^T - (Am + a)(Dm + d)^T]
          + Tr(B Σ C^T)[A Σ D^T + (Am + a)(Dm + d)^T]


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 43 



8.3 Miscellaneous 


8 GAUSSIANS 


    E[(Ax + a)^T (Bx + b)(Cx + c)^T (Dx + d)]
        = Tr[A Σ (C^T D + D^T C) Σ B^T]
          + [(Am + a)^T B + (Bm + b)^T A] Σ [C^T (Dm + d) + D^T (Cm + c)]
          + [Tr(A Σ B^T) + (Am + a)^T (Bm + b)][Tr(C Σ D^T) + (Cm + c)^T (Dm + d)]

See [7].

8.2.5 Moments

    E[x] = Σ_k ρ_k m_k                                                     (384)
    Cov(x) = Σ_k Σ_k' ρ_k ρ_k' ( Σ_k + m_k m_k^T - m_k m_k'^T )            (385)


8.3 Miscellaneous 

8.3.1 Whitening 

Assume x ~ Af(m, E) then 

z = X -1 / 2 (x — m) ~ Af(0, 1) (386) 

Conversely having z ~ Af(0, 1) one can generate data x ~ Af( m, E) by setting 

x = S 1/,2 z + m ~ JV(m, E) (387) 

Note that E 1 / 2 means the matrix which fulfils E^E 1 / 2 = E, and that it exists 
and is unique since E is positive definite. 


8.3.2 The Chi-Square connection 

Assume x ~ N(m, \Sigma) and x to be n dimensional, then

    z = (x - m)^T \Sigma^{-1} (x - m) ~ \chi^2_n                          (388)

where \chi^2_n denotes the Chi square distribution with n degrees of freedom.


8.3.3 Entropy 

Entropy of a D-dimensional gaussian

    H(x) = - \int N(m, \Sigma) \ln N(m, \Sigma) dx = \ln\sqrt{\det(2\pi\Sigma)} + D/2    (389)


8.4 Mixture of Gaussians 

8.4.1 Density 

The variable x is distributed as a mixture of gaussians if it has the density

    p(x) = \sum_{k=1}^{K} \rho_k \frac{1}{\sqrt{\det(2\pi\Sigma_k)}} \exp[ -1/2 (x - m_k)^T \Sigma_k^{-1} (x - m_k) ]    (390)

where \rho_k sum to 1 and the \Sigma_k all are positive definite.
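A sketch of evaluating the mixture density of Eq. (390), assuming NumPy and SciPy; the weights, means and covariances are illustrative only.

    import numpy as np
    from scipy.stats import multivariate_normal

    rho = [0.3, 0.7]                                    # weights, sum to 1
    means = [np.zeros(2), np.array([2.0, -1.0])]
    covs = [np.eye(2), np.array([[1.0, 0.3], [0.3, 0.5]])]

    def mog_pdf(x):
        return sum(r * multivariate_normal(m, C).pdf(x)
                   for r, m, C in zip(rho, means, covs))

    print(mog_pdf(np.array([0.5, 0.5])))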




8.4.2 Derivatives 

Defining p(s) = \sum_k \rho_k N_s(\mu_k, \Sigma_k) one gets

    \frac{\partial \ln p(s)}{\partial \rho_j}
        = \frac{\rho_j N_s(\mu_j, \Sigma_j)}{\sum_k \rho_k N_s(\mu_k, \Sigma_k)}
          \frac{\partial}{\partial \rho_j} \ln[\rho_j N_s(\mu_j, \Sigma_j)]                     (391)
        = \frac{\rho_j N_s(\mu_j, \Sigma_j)}{\sum_k \rho_k N_s(\mu_k, \Sigma_k)} \frac{1}{\rho_j}    (392)

    \frac{\partial \ln p(s)}{\partial \mu_j}
        = \frac{\rho_j N_s(\mu_j, \Sigma_j)}{\sum_k \rho_k N_s(\mu_k, \Sigma_k)}
          \frac{\partial}{\partial \mu_j} \ln[\rho_j N_s(\mu_j, \Sigma_j)]                      (393)
        = \frac{\rho_j N_s(\mu_j, \Sigma_j)}{\sum_k \rho_k N_s(\mu_k, \Sigma_k)}
          [\Sigma_j^{-1}(s - \mu_j)]                                                            (394)

    \frac{\partial \ln p(s)}{\partial \Sigma_j}
        = \frac{\rho_j N_s(\mu_j, \Sigma_j)}{\sum_k \rho_k N_s(\mu_k, \Sigma_k)}
          \frac{\partial}{\partial \Sigma_j} \ln[\rho_j N_s(\mu_j, \Sigma_j)]                   (395)
        = \frac{\rho_j N_s(\mu_j, \Sigma_j)}{\sum_k \rho_k N_s(\mu_k, \Sigma_k)}
          \frac{1}{2} [ -\Sigma_j^{-1} + \Sigma_j^{-1}(s - \mu_j)(s - \mu_j)^T \Sigma_j^{-1} ]  (396)

But \rho_k and \Sigma_k need to be constrained.




9 Special Matrices 

9.1 Block matrices 

Let A_{ij} denote the ij-th block of A.


9.1.1 Multiplication 

Assuming the dimensions of the blocks match we have

    [ A_11  A_12 ] [ B_11  B_12 ]   [ A_11 B_11 + A_12 B_21    A_11 B_12 + A_12 B_22 ]
    [ A_21  A_22 ] [ B_21  B_22 ] = [ A_21 B_11 + A_22 B_21    A_21 B_12 + A_22 B_22 ]


9.1.2 The Determinant

The determinant can be expressed by use of

    C_1 = A_11 - A_12 A_22^{-1} A_21                                      (397)
    C_2 = A_22 - A_21 A_11^{-1} A_12                                      (398)

as

    det( [ A_11  A_12 ] ) = det(A_22) \cdot det(C_1) = det(A_11) \cdot det(C_2)
         [ A_21  A_22 ]


9.1.3 The Inverse

The inverse can be expressed by use of

    C_1 = A_11 - A_12 A_22^{-1} A_21                                      (399)
    C_2 = A_22 - A_21 A_11^{-1} A_12                                      (400)

as

    [ A_11  A_12 ]^{-1}   [ C_1^{-1}                  -A_11^{-1} A_12 C_2^{-1} ]
    [ A_21  A_22 ]      = [ -C_2^{-1} A_21 A_11^{-1}   C_2^{-1}               ]

                          [ A_11^{-1} + A_11^{-1} A_12 C_2^{-1} A_21 A_11^{-1}    -C_1^{-1} A_12 A_22^{-1}                               ]
                        = [ -A_22^{-1} A_21 C_1^{-1}                               A_22^{-1} + A_22^{-1} A_21 C_1^{-1} A_12 A_22^{-1}    ]


9.1.4 Block diagonal

For block diagonal matrices we have

    [ A_11  0    ]^{-1}   [ (A_11)^{-1}  0           ]
    [ 0     A_22 ]      = [ 0            (A_22)^{-1} ]                    (401)

    det( [ A_11  0    ] ) = det(A_11) \cdot det(A_22)                     (402)
         [ 0     A_22 ]




9.1.5 Schur complement 

Regard the matrix

    [ A_11  A_12 ]
    [ A_21  A_22 ]

The Schur complement of block A_11 of the matrix above is the matrix (denoted
C_2 in the text above)

    A_22 - A_21 A_11^{-1} A_12

The Schur complement of block A_22 of the matrix above is the matrix (denoted
C_1 in the text above)

    A_11 - A_12 A_22^{-1} A_21

Using the Schur complement, one can rewrite the inverse of a block matrix

    [ A_11  A_12 ]^{-1}   [ I                0 ] [ (A_11 - A_12 A_22^{-1} A_21)^{-1}  0         ] [ I  -A_12 A_22^{-1} ]
    [ A_21  A_22 ]      = [ -A_22^{-1} A_21  I ] [ 0                                  A_22^{-1} ] [ 0   I              ]

The Schur complement is useful when solving linear systems of the form

    [ A_11  A_12 ] [ x_1 ]   [ b_1 ]
    [ A_21  A_22 ] [ x_2 ] = [ b_2 ]

which has the following equation for x_1

    (A_11 - A_12 A_22^{-1} A_21) x_1 = b_1 - A_12 A_22^{-1} b_2

When the appropriate inverses exist, this can be solved for x_1, which can then
be inserted in the equation for x_2 to solve for x_2.
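A sketch of this block-elimination procedure with random, well-conditioned data, assuming NumPy; all variable names are illustrative.

    import numpy as np

    rng = np.random.default_rng(3)
    n1, n2 = 3, 2
    M = rng.normal(size=(n1 + n2, n1 + n2)) + 5 * np.eye(n1 + n2)   # well conditioned
    A11, A12 = M[:n1, :n1], M[:n1, n1:]
    A21, A22 = M[n1:, :n1], M[n1:, n1:]
    b1, b2 = rng.normal(size=n1), rng.normal(size=n2)

    C1 = A11 - A12 @ np.linalg.solve(A22, A21)                      # Schur complement of A22
    x1 = np.linalg.solve(C1, b1 - A12 @ np.linalg.solve(A22, b2))
    x2 = np.linalg.solve(A22, b2 - A21 @ x1)

    print(np.allclose(M @ np.concatenate([x1, x2]), np.concatenate([b1, b2])))   # True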

9.2 Discrete Fourier Transform Matrix, The

The DFT matrix is an N x N symmetric matrix W_N, where the k, nth element is
given by

    W_N^{kn} = e^{-j 2\pi k n / N}                                        (403)

Thus the discrete Fourier transform (DFT) can be expressed as

    X(k) = \sum_{n=0}^{N-1} x(n) W_N^{kn}.                                (404)

Likewise the inverse discrete Fourier transform (IDFT) can be expressed as

    x(n) = \frac{1}{N} \sum_{k=0}^{N-1} X(k) W_N^{-kn}.                   (405)

The DFT of the vector x = [x(0), x(1), ..., x(N-1)]^T can be written in matrix
form as

    X = W_N x,                                                            (406)



where X = [X(0), X(1), ..., X(N-1)]^T. The IDFT is similarly given as

    x = \frac{1}{N} W_N^* X.                                              (407)

Some properties of W_N exist:

    W_N^{-1} = \frac{1}{N} W_N^*                                          (408)

    W_N W_N^* = N I                                                       (409)

    W_N^* = W_N^H                                                         (410)

If W_N = e^{-j 2\pi / N}, then [25]

    W_N^{m + N/2} = -W_N^m                                                (411)

Notice, the DFT matrix is a Vandermonde Matrix.

The following important relation between the circulant matrix and the dis-
crete Fourier transform (DFT) exists

    T_C = W_N^{-1} (I \circ (W_N t)) W_N,                                 (412)

where t = [t_0, t_1, ..., t_{n-1}]^T is the first row of T_C.
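A numerical sketch of the circulant/DFT relation: the DFT matrix diagonalises a circulant matrix. The sign convention of the exponent and the row-versus-column definition of t are assumptions of this sketch (NumPy/SciPy assumed).

    import numpy as np
    from scipy.linalg import circulant

    N = 5
    t = np.arange(1.0, N + 1)
    k, n = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
    W = np.exp(-2j * np.pi * k * n / N)            # W_{kn} = e^{-j 2 pi k n / N}

    TC = circulant(t)                              # circulant matrix built from t
    D = W @ TC @ np.linalg.inv(W)                  # should be (numerically) diagonal
    print(np.allclose(D, np.diag(np.diag(D))))     # True: off-diagonal ~ 0
    print(np.allclose(np.diag(D), W @ t))          # eigenvalues equal W t here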


9.3 Hermitian Matrices and skew-Hermitian 

A matrix A \in C^{n x n} is called Hermitian if

    A^H = A

For real valued matrices, Hermitian and symmetric matrices are equivalent.

    A is Hermitian   <=>   x^H A x \in R,  \forall x \in C^{n x 1}        (413)
    A is Hermitian   =>    eig(A) \in R                                   (414)

Note that

    A = B + iC

where B, C are hermitian, then

    B = \frac{A + A^H}{2},        C = \frac{A - A^H}{2i}


9.3.1 Skew-Hermitian

A matrix A is called skew-hermitian if

    A = -A^H

For real valued matrices, skew-Hermitian and skew-symmetric matrices are
equivalent.

    A Hermitian        <=>   iA is skew-hermitian                         (415)
    A skew-Hermitian   <=>   x^H A y = -x^H A^H y,  \forall x, y          (416)
    A skew-Hermitian   =>    eig(A) = i\lambda,  \lambda \in R            (417)




9.4 Idempotent Matrices 

A matrix A is idempotent if 

AA = A 

Idempotent matrices A and B have the following properties

    A^n = A,  for n = 1, 2, 3, ...                                        (418)
    I - A  is idempotent                                                  (419)
    A^H  is idempotent                                                    (420)
    I - A^H  is idempotent                                                (421)
    If AB = BA  =>  AB is idempotent                                      (422)
    rank(A) = Tr(A)                                                       (423)
    A(I - A) = 0                                                          (424)
    (I - A)A = 0                                                          (425)
    A^+ = A                                                               (426)
    f(sI + tA) = (I - A)f(s) + Af(s + t)                                  (427)

Note that A - I is not necessarily idempotent.

9.4.1 Nilpotent 

A matrix A is nilpotent if 

    A^2 = 0

A nilpotent matrix has the following property:

    f(sI + tA) = I f(s) + t A f'(s)                                       (428)


9.4.2 Unipotent 

A matrix A is unipotent if 

AA = I 

A unipotent matrix has the following property: 

    f(sI + tA) = [(I + A)f(s + t) + (I - A)f(s - t)]/2                    (429)

9.5 Orthogonal matrices 

A square matrix Q is orthogonal, if and only if,

    Q^T Q = Q Q^T = I

and then Q has the following properties 

• Its eigenvalues are placed on the unit circle. 

• Its eigenvectors are unitary, i.e. have length one. 

• The inverse of an orthogonal matrix is orthogonal too. 




Basic properties for the orthogonal matrix Q

    Q^{-1} = Q^T
    Q^{-T} = Q
    Q Q^T = I
    Q^T Q = I
    det(Q) = \pm 1


9.5.1 Ortho-Sym

A matrix Q_+ which simultaneously is orthogonal and symmetric is called an
ortho-sym matrix [20]. Hereby

    Q_+^T Q_+ = I                                                         (430)
    Q_+ = Q_+^T                                                           (431)

The powers of an ortho-sym matrix are given by the following rule

    Q_+^k = \frac{1 + (-1)^k}{2} I + \frac{1 + (-1)^{k+1}}{2} Q_+         (432)
          = \frac{1 + \cos(k\pi)}{2} I + \frac{1 - \cos(k\pi)}{2} Q_+     (433)


9.5.2 Ortho-Skew

A matrix which simultaneously is orthogonal and antisymmetric is called an
ortho-skew matrix [20]. Hereby

    Q_-^H Q_- = I                                                         (434)
    Q_- = -Q_-^H                                                          (435)

The powers of an ortho-skew matrix are given by the following rule

    Q_-^k = \frac{i^k + (-i)^k}{2} I - i \frac{i^k - (-i)^k}{2} Q_-       (436)
          = \cos(k\frac{\pi}{2}) I + \sin(k\frac{\pi}{2}) Q_-             (437)


9.5.3 Decomposition 

A square matrix A can always be written as a sum of a symmetric A_+ and an
antisymmetric matrix A_-

    A = A_+ + A_-                                                         (438)


9.6 Positive Definite and Semi-definite Matrices 

9.6.1 Definitions 

A matrix A is positive definite if and only if

    x^T A x > 0,    \forall x \neq 0                                      (439)

A matrix A is positive semi-definite if and only if

    x^T A x \geq 0,    \forall x                                          (440)

Note that if A is positive definite, then A is also positive semi-definite.
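A practical positive-definiteness test as a sketch (NumPy assumed): all eigenvalues of the symmetric part positive, or equivalently a successful Cholesky factorisation.

    import numpy as np

    A = np.array([[2.0, 0.5], [0.5, 1.0]])
    print(np.all(np.linalg.eigvalsh((A + A.T) / 2) > 0))    # eigenvalue criterion
    try:
        np.linalg.cholesky(A)          # succeeds only for (numerically) pos. def. A
        print(True)
    except np.linalg.LinAlgError:
        print(False)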




9.6.2 Eigenvalues

The following holds with respect to the eigenvalues:

    A pos. def.       <=>   eig(\frac{A + A^H}{2}) > 0
    A pos. semi-def.  <=>   eig(\frac{A + A^H}{2}) \geq 0                 (441)

9.6.3 Trace

The following holds with respect to the trace:

    A pos. def.       =>   Tr(A) > 0
    A pos. semi-def.  =>   Tr(A) \geq 0                                   (442)


9.6.4 Inverse 

If A is positive definite, then A is invertible and A^{-1} is also positive definite.

9.6.5 Diagonal

If A is positive definite, then A_{ii} > 0, \forall i


9.6.6 Decomposition I

The matrix A is positive semi-definite of rank r  <=>  there exists a matrix B of
rank r such that A = BB^T

The matrix A is positive definite  <=>  there exists an invertible matrix B such
that A = BB^T


9.6.7 Decomposition II 

Assume A is an n x n positive semi-definite matrix of rank r, then there exists an
n x r matrix B of rank r such that B^T A B = I.


9.6.8 Equation with zeros 

Assume A is positive semi-definite, then X^T A X = 0  =>  AX = 0

9.6.9 Rank of product 

Assume A is positive definite, then rank(BAB T ) = rank(B) 

9.6.10 Positive definite property 

If A is n x n positive definite and B is r x n of rank r, then BAB T is positive 
definite. 

9.6.11 Outer Product 

If X is n x r, where n \leq r and rank(X) = n, then XX^T is positive definite.




9.6.12 Small perturbations

If A is positive definite and B is symmetric, then A - tB is positive definite for
sufficiently small t.

9.6.13 Hadamard inequality

If A is a positive definite or semi-definite matrix, then

    det(A) \leq \prod_i A_{ii}

See [15, pp. 477]

9.6.14 Hadamard product relation

Assume that P = AA^T and Q = BB^T are positive semi-definite matrices, it
then holds that

    P \circ Q = R R^T

where the columns of R are constructed as follows: r_{i + (j-1)N_A} = a_i \circ b_j, for
i = 1, 2, ..., N_A and j = 1, 2, ..., N_B. The result is unpublished, but reported by
Pavel Sakov and Craig Bishop.


9.7 Singleentry Matrix, The 
9.7.1 Definition 


The single-entry matrix J^{ij} \in R^{n x n} is defined as the matrix which is zero
everywhere except in the entry (i, j) in which it is 1. In a 4 x 4 example one
might have

             [ 0  0  0  0 ]
    J^{23} = [ 0  0  1  0 ]                                               (443)
             [ 0  0  0  0 ]
             [ 0  0  0  0 ]


The single-entry matrix is very useful when working with derivatives of expres- 
sions involving matrices. 


9.7.2 Swap and Zeros 

Assume A to be n x m and J^{ij} to be m x p

    A J^{ij} = [ 0  0  ...  A_i  ...  0 ]                                 (444)

i.e. an n x p matrix of zeros with the i.th column of A in place of the j.th
column. Assume A to be n x m and J^{ij} to be p x n

               [ 0   ]
               [ ... ]
    J^{ij} A = [ A_j ]                                                    (445)
               [ ... ]
               [ 0   ]

i.e. a p x m matrix of zeros with the j.th row of A in place of the i.th row.

9.7.3 Rewriting product of elements 


    A_{ki} B_{jl} = (A e_i e_j^T B)_{kl} = (A J^{ij} B)_{kl}              (446)
    A_{ik} B_{lj} = (A^T e_i e_j^T B^T)_{kl} = (A^T J^{ij} B^T)_{kl}      (447)
    A_{ik} B_{jl} = (A^T e_i e_j^T B)_{kl} = (A^T J^{ij} B)_{kl}          (448)
    A_{ki} B_{lj} = (A e_i e_j^T B^T)_{kl} = (A J^{ij} B^T)_{kl}          (449)


9.7.4 Properties of the Singleentry Matrix 

If i = j

    J^{ij} J^{ij} = J^{ij}
    J^{ij} (J^{ij})^T = J^{ij}
    (J^{ij})^T J^{ij} = J^{ij}
    (J^{ij})^T (J^{ij})^T = J^{ij}

If i \neq j

    J^{ij} J^{ij} = 0
    (J^{ij})^T (J^{ij})^T = 0
    J^{ij} (J^{ij})^T = J^{ii}
    (J^{ij})^T J^{ij} = J^{jj}

9.7.5 The Singleentry Matrix in Scalar Expressions 


Assume A is n x m and J^{ij} is m x n, then

    Tr(A J^{ij}) = Tr(J^{ij} A) = (A^T)_{ij}                              (450)

Assume A is n x n, J^{ij} is n x m and B is m x n, then

    Tr(A J^{ij} B) = (A^T B^T)_{ij}                                       (451)
    Tr(A J^{ji} B) = (B A)_{ij}                                           (452)
    Tr(A J^{ij} J^{ij} B) = diag(A^T B^T)_{ij}                            (453)

Assume A is n x n, J^{ij} is n x m and B is m x n, then

    x^T A J^{ij} B x = (A^T x x^T B^T)_{ij}                               (454)
    x^T A J^{ij} J^{ij} B x = diag(A^T x x^T B^T)_{ij}                    (455)
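A tiny sketch checking Eq. (450) numerically with a random A (NumPy assumed; the chosen indices are arbitrary).

    import numpy as np

    rng = np.random.default_rng(4)
    n, m, i, j = 4, 3, 2, 1
    A = rng.normal(size=(n, m))
    J = np.zeros((m, n)); J[i, j] = 1.0             # single-entry matrix J^{ij}
    print(np.isclose(np.trace(A @ J), A.T[i, j]))   # True: Tr(A J^{ij}) = (A^T)_{ij}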


9.7.6 Structure Matrices 

The structure matrix is defined by

    S^{ij} = \frac{\partial A}{\partial A_{ij}}                           (456)

If A has no special structure then

    S^{ij} = J^{ij}                                                       (457)

If A is symmetric then

    S^{ij} = J^{ij} + J^{ji} - J^{ij} J^{ij}                              (458)





9.8 Symmetric, Skew-symmetric/ Antisymmetric 

9.8.1 Symmetric 

The matrix A is said to be symmetric if 


    A = A^T                                                               (459)

Symmetric matrices have many important properties, e.g. that their eigenvalues 
are real and eigenvectors orthogonal. 

9.8.2 Skew-symmetric / Antisymmetric 

The antisymmetric matrix is also known as the skew symmetric matrix. It has 
the following property from which it is defined 

    A = -A^T                                                              (460)

Hereby, it can be seen that the antisymmetric matrices always have a zero
diagonal. The n x n antisymmetric matrices also have the following properties.

    det(A^T) = det(-A) = (-1)^n det(A)                                    (461)
    -det(A) = det(-A) = 0,  if n is odd                                   (462)

The eigenvalues of an antisymmetric matrix are placed on the imaginary axis 
and the eigenvectors are unitary. 


9.8.3 Decomposition 

A square matrix A can always be written as a sum of a symmetric A_+ and an
antisymmetric matrix A_-

    A = A_+ + A_-                                                         (463)

Such a decomposition could e.g. be

    A = \frac{A + A^T}{2} + \frac{A - A^T}{2} = A_+ + A_-                 (464)


9.9 Toeplitz Matrices 


A Toeplitz matrix T is a matrix where the elements of each diagonal are the
same. In the n x n square case, it has the following structure:

        [ t_{11}  t_{12}  ...    t_{1n}  ]   [ t_0         t_1    ...    t_{n-1} ]
    T = [ t_{21}  t_{11}  ...    ...     ] = [ t_{-1}      t_0    ...    ...     ]    (465)
        [ ...     ...     ...    t_{12}  ]   [ ...         ...    ...    t_1     ]
        [ t_{n1}  ...     t_{21} t_{11}  ]   [ t_{-(n-1)}  ...    t_{-1} t_0     ]


A Toeplitz matrix is persymmetric. If a matrix is persymmetric (or orthosym-
metric), it means that the matrix is symmetric about its northeast-southwest
diagonal (anti-diagonal) [12]. Persymmetric matrices form a larger class of matri-
ces, since a persymmetric matrix does not necessarily have a Toeplitz structure. There




are some special cases of Toeplitz matrices. The symmetric Toeplitz matrix is
given by:

        [ t_0      t_1   ...   t_{n-1} ]
    T = [ t_1      t_0   ...   ...     ]                                  (466)
        [ ...      ...   ...   t_1     ]
        [ t_{n-1}  ...   t_1   t_0     ]

The circular Toeplitz matrix:

          [ t_0      t_1       ...   t_{n-1} ]
    T_C = [ t_{n-1}  t_0       ...   ...     ]                            (467)
          [ ...      ...       ...   t_1     ]
          [ t_1      ...   t_{n-1}   t_0     ]

The upper triangular Toeplitz matrix:

          [ t_0  t_1  ...  t_{n-1} ]
    T_U = [ 0    t_0  ...  ...     ]                                      (468)
          [ ...  ...  ...  t_1     ]
          [ 0    ...  0    t_0     ]

and the lower triangular Toeplitz matrix:

          [ t_0         0       ...   0   ]
    T_L = [ t_{-1}      t_0     ...   ... ]                               (469)
          [ ...         ...     ...   0   ]
          [ t_{-(n-1)}  ...   t_{-1}  t_0 ]


9.9.1 Properties of Toeplitz Matrices

The Toeplitz matrix has some computational advantages. The addition of two
Toeplitz matrices can be done with O(n) flops, multiplication of two Toeplitz
matrices can be done in O(n ln n) flops. Toeplitz equation systems can be solved
in O(n^2) flops. The inverse of a positive definite Toeplitz matrix can be found
in O(n^2) flops too. The inverse of a Toeplitz matrix is persymmetric. The
product of two lower triangular Toeplitz matrices is a Toeplitz matrix. More
information on Toeplitz matrices and circulant matrices can be found in [13].


9.10 Transition matrices 

A square matrix P is a transition matrix, also known as a stochastic matrix or
probability matrix, if

    0 \leq (P)_{ij} \leq 1,        \sum_j (P)_{ij} = 1

The transition matrix usually describes the probability of moving from state i
to j in one step and is closely related to Markov processes. Transition matrices




have the following properties

    Prob[i -> j in 1 step]  = (P)_{ij}                                    (470)
    Prob[i -> j in 2 steps] = (P^2)_{ij}                                  (471)
    Prob[i -> j in k steps] = (P^k)_{ij}                                  (472)
    If all rows are identical  =>  P^n = P                                (473)
    \alpha P = \alpha,    \alpha is called invariant                      (474)

where \alpha is a so-called stationary probability vector, i.e., 0 \leq \alpha_i \leq 1 and
\sum_i \alpha_i = 1.
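A sketch of finding the invariant vector \alpha with \alpha P = \alpha by repeated multiplication (NumPy assumed; the example P is illustrative).

    import numpy as np

    P = np.array([[0.9, 0.1, 0.0],
                  [0.2, 0.7, 0.1],
                  [0.0, 0.3, 0.7]])          # rows sum to 1
    alpha = np.ones(3) / 3
    for _ in range(500):
        alpha = alpha @ P                    # power iteration on the row vector
    print(alpha, np.allclose(alpha @ P, alpha))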


9.11 Units, Permutation and Shift 

9.11.1 Unit vector 

Let e_i \in R^{n x 1} be the ith unit vector, i.e. the vector which is zero in all entries
except the ith at which it is 1.

9.11.2 Rows and Columns

    i.th row of A    = e_i^T A                                            (475)
    j.th column of A = A e_j                                              (476)


9.11.3 Permutations 

Let P be some permutation matrix, e.g.

        [ 0  1  0 ]                       [ e_2^T ]
    P = [ 1  0  0 ] = [ e_2  e_1  e_3 ] = [ e_1^T ]                       (477)
        [ 0  0  1 ]                       [ e_3^T ]

For permutation matrices it holds that

    P P^T = I                                                             (478)

and that

                                          [ e_2^T A ]
    A P = [ A e_2  A e_1  A e_3 ]   P A = [ e_1^T A ]                     (479)
                                          [ e_3^T A ]

That is, the first is a matrix which has the columns of A but in permuted sequence
and the second is a matrix which has the rows of A but in permuted sequence.


9.11.4 Translation, Shift or Lag Operators 


Let L denote the lag (or 'translation' or 'shift') operator defined on a 4 x 4
example by

        [ 0  0  0  0 ]
    L = [ 1  0  0  0 ]                                                    (480)
        [ 0  1  0  0 ]
        [ 0  0  1  0 ]

i.e. a matrix of zeros with one on the sub-diagonal, (L)_{ij} = \delta_{i,j+1}. With some
signal x_t for t = 1, ..., N, the n.th power of the lag operator shifts the indices,
i.e.

    (L^n x)_t = { 0        for t = 1, ..., n                              (481)
                { x_{t-n}  for t = n+1, ..., N

A related but slightly different matrix is the 'recurrent shifted' operator defined
on a 4 x 4 example by

              [ 0  0  0  1 ]
    \hat{L} = [ 1  0  0  0 ]                                              (482)
              [ 0  1  0  0 ]
              [ 0  0  1  0 ]

i.e. a matrix defined by (\hat{L})_{ij} = \delta_{i,j+1} + \delta_{i,j+1-dim(\hat{L})}. On a signal x it
has the effect

    (\hat{L}^n x)_t = x_{t'},    t' = [(t - n) mod N] + 1                 (483)

That is, \hat{L} is like the shift operator L except that it 'wraps' the signal as if it
was periodic and shifted (substituting the zeros with the rear end of the signal).
Note that \hat{L} is invertible and orthogonal, i.e.

    \hat{L}^{-1} = \hat{L}^T                                              (484)


9.12 Vandermonde Matrices 

A Vandermonde matrix has the form [15]

        [ 1  v_1  v_1^2  ...  v_1^{n-1} ]
    V = [ 1  v_2  v_2^2  ...  v_2^{n-1} ]                                 (485)
        [ ...                ...        ]
        [ 1  v_n  v_n^2  ...  v_n^{n-1} ]

The transpose of V is also said to be a Vandermonde matrix. The determinant is
given by

    det V = \prod_{i > j} (v_i - v_j)                                     (486)
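A sketch checking Eq. (486) for a small case (NumPy assumed; the sample nodes v are arbitrary).

    import numpy as np
    from itertools import combinations

    v = np.array([1.0, 2.0, 4.0, 7.0])
    V = np.vander(v, increasing=True)       # rows [1, v_i, v_i^2, ...]
    prod = np.prod([v[i] - v[j] for j, i in combinations(range(len(v)), 2)])
    print(np.isclose(np.linalg.det(V), prod))   # True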




10 Functions and Operators 

10.1 Functions and Series 

10.1.1 Finite Series

    (X^n - I)(X - I)^{-1} = I + X + X^2 + ... + X^{n-1}                   (487)


10.1.2 Taylor Expansion of Scalar Function

Consider some scalar function f(x) which takes the vector x as an argument.
This we can Taylor expand around x_0

    f(x) \cong f(x_0) + g(x_0)^T (x - x_0) + \frac{1}{2}(x - x_0)^T H(x_0)(x - x_0)    (488)

where g(x_0) is the gradient and H(x_0) the Hessian of f evaluated at x_0.

10.1.3 Matrix Functions by Infinite Series

As for analytical functions in one dimension, one can define a matrix function
for square matrices X by an infinite series

    f(X) = \sum_{n=0}^{\infty} c_n X^n                                    (489)

assuming the limit exists and is finite. If the coefficients c_n fulfil \sum_n c_n x^n < \infty,
then one can prove that the above series exists and is finite, see [1]. Thus for
any analytical function f(x) there exists a corresponding matrix function f(X)
constructed by the Taylor expansion. Using this one can prove the following
results:

1) A matrix A is a zero of its own characteristic polynomium [1]:

    0 = det(\lambda I - A) = \sum_n c_n \lambda^n    =>    p(A) = 0       (490)

2) If A is square it holds that [1]

    A = U B U^{-1}    =>    f(A) = U f(B) U^{-1}                          (491)

3) A useful fact when using power series is that

    A^n -> 0  for  n -> \infty    if    |A| < 1                           (492)


10.1.4 Identity and commutations 

It holds for an analytical matrix function f(X) that

    f(AB)A = A f(BA)                                                      (493)

see B.1.2 for a proof.




10.1.5 Exponential Matrix Function 

In analogy to the ordinary scalar exponential function, one can define exponen-
tial and logarithmic matrix functions:

    e^A        \equiv \sum_{n=0}^{\infty} \frac{1}{n!} A^n = I + A + \frac{1}{2}A^2 + ...            (494)

    e^{-A}     \equiv \sum_{n=0}^{\infty} \frac{1}{n!} (-1)^n A^n = I - A + \frac{1}{2}A^2 - ...     (495)

    e^{tA}     \equiv \sum_{n=0}^{\infty} \frac{1}{n!} (tA)^n = I + tA + \frac{1}{2}t^2 A^2 + ...    (496)

    \ln(I + A) \equiv \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n} A^n = A - \frac{1}{2}A^2 + \frac{1}{3}A^3 - ...   (497)

Some of the properties of the exponential function are [1]

    e^A e^B = e^{A+B}    if AB = BA                                       (498)
    (e^A)^{-1} = e^{-A}                                                   (499)
    \frac{d}{dt} e^{tA} = A e^{tA} = e^{tA} A,    t \in R                 (500)
    \frac{d}{dt} Tr(e^{tA}) = Tr(A e^{tA})                                (501)
    det(e^A) = e^{Tr(A)}                                                  (502)
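A sketch checking Eq. (502), det(e^A) = e^{Tr(A)}, using SciPy's matrix exponential (the random A is illustrative).

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(5)
    A = rng.normal(size=(4, 4))
    print(np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A))))   # True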


10.1.6 Trigonometric Functions 


    sin(A) \equiv \sum_{n=0}^{\infty} \frac{(-1)^n A^{2n+1}}{(2n+1)!} = A - \frac{1}{3!}A^3 + \frac{1}{5!}A^5 - ...    (503)

    cos(A) \equiv \sum_{n=0}^{\infty} \frac{(-1)^n A^{2n}}{(2n)!} = I - \frac{1}{2!}A^2 + \frac{1}{4!}A^4 - ...        (504)


10.2 Kronecker and Vec Operator 

10.2.1 The Kronecker Product 


The Kronecker product of an m x n matrix A and an r x q matrix B, is an
mr x nq matrix, A \otimes B, defined as

                  [ A_{11}B  A_{12}B  ...  A_{1n}B ]
    A \otimes B = [ A_{21}B  A_{22}B  ...  A_{2n}B ]                      (505)
                  [ ...                            ]
                  [ A_{m1}B  A_{m2}B  ...  A_{mn}B ]




The Kronecker product has the following properties (see [19])

    A \otimes (B + C) = A \otimes B + A \otimes C                         (506)
    A \otimes B \neq B \otimes A    in general                            (507)
    A \otimes (B \otimes C) = (A \otimes B) \otimes C                     (508)
    (\alpha_A A \otimes \alpha_B B) = \alpha_A \alpha_B (A \otimes B)     (509)
    (A \otimes B)^T = A^T \otimes B^T                                     (510)
    (A \otimes B)(C \otimes D) = AC \otimes BD                            (511)
    (A \otimes B)^{-1} = A^{-1} \otimes B^{-1}                            (512)
    (A \otimes B)^+ = A^+ \otimes B^+                                     (513)
    rank(A \otimes B) = rank(A) rank(B)                                   (514)
    Tr(A \otimes B) = Tr(A)Tr(B) = Tr(\Lambda_A \otimes \Lambda_B)        (515)
    det(A \otimes B) = det(A)^{rank(B)} det(B)^{rank(A)}                  (516)
    {eig(A \otimes B)} = {eig(B \otimes A)}    if A, B are square         (517)
    {eig(A \otimes B)} = {eig(A) eig(B)^T}
                         if A, B are symmetric and square                 (518)
    eig(A \otimes B) = eig(A) \otimes eig(B)                              (519)

Where {\lambda_i} denotes the set of values \lambda_i, that is, the values in no particular
order or structure, and \Lambda_A denotes the diagonal matrix with the eigenvalues of
A.
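A sketch checking the mixed-product rule (511) numerically (NumPy assumed; the shapes are arbitrary but conformable).

    import numpy as np

    rng = np.random.default_rng(6)
    A, B = rng.normal(size=(2, 3)), rng.normal(size=(4, 2))
    C, D = rng.normal(size=(3, 2)), rng.normal(size=(2, 5))
    print(np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D)))   # True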


10.2.2 The Vec Operator 

The vec-operator applied on a matrix A stacks the columns into a vector, i.e.
for a 2 x 2 matrix

    A = [ A_{11}  A_{12} ]          vec(A) = [ A_{11} ]
        [ A_{21}  A_{22} ]                   [ A_{21} ]
                                             [ A_{12} ]
                                             [ A_{22} ]

Properties of the vec-operator include (see [19])

    vec(AXB) = (B^T \otimes A) vec(X)                                     (520)
    Tr(A^T B) = vec(A)^T vec(B)                                           (521)
    vec(A + B) = vec(A) + vec(B)                                          (522)
    vec(\alpha A) = \alpha \cdot vec(A)                                   (523)
    a^T X B X^T c = vec(X)^T (B \otimes c a^T) vec(X)                     (524)

See B.1.1 for a proof for Eq. 524.
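A sketch checking the vec rule (520) with column-major stacking (NumPy assumed; shapes are arbitrary).

    import numpy as np

    rng = np.random.default_rng(7)
    A, X, B = rng.normal(size=(3, 4)), rng.normal(size=(4, 2)), rng.normal(size=(2, 5))
    vec = lambda M: M.reshape(-1, order='F')        # stack columns
    print(np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X)))   # True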




10.3 Vector Norms 

10.3.1 Examples

    ||x||_1      = \sum_i |x_i|                                           (525)
    ||x||_2^2    = x^H x                                                  (526)
    ||x||_p      = [ \sum_i |x_i|^p ]^{1/p}                               (527)
    ||x||_\infty = max_i |x_i|                                            (528)

Further reading in e.g. [12, p. 52].

10.4 Matrix Norms

10.4.1 Definitions

A matrix norm is a mapping which fulfils

    ||A|| \geq 0                                                          (529)
    ||A|| = 0  <=>  A = 0                                                 (530)
    ||cA|| = |c| ||A||,    c \in R                                        (531)
    ||A + B|| \leq ||A|| + ||B||                                          (532)


10.4.2 Induced Norm or Operator Norm

An induced norm is a matrix norm induced by a vector norm by the following

    ||A|| = sup{ ||Ax|| : ||x|| = 1 }                                     (533)

where || . || on the left side is the induced matrix norm, while || . || on the right
side denotes the vector norm. For induced norms it holds that

    ||I|| = 1                                                             (534)
    ||Ax|| \leq ||A|| ||x||,    for all A, x                              (535)
    ||AB|| \leq ||A|| ||B||,    for all A, B                              (536)

10.4.3 Examples

    ||A||_1      = max_j \sum_i |A_{ij}|                                  (537)
    ||A||_2      = \sqrt{max eig(A^H A)}                                  (538)
    ||A||_p      = ( max_{||x||_p = 1} ||Ax||_p )^{1/p}                   (539)
    ||A||_\infty = max_i \sum_j |A_{ij}|                                  (540)
    ||A||_F      = \sqrt{ \sum_{ij} |A_{ij}|^2 } = \sqrt{Tr(A A^H)}    (Frobenius)    (541)




    ||A||_max = max_{ij} |A_{ij}|                                         (542)
    ||A||_KF  = ||sing(A)||_1    (Ky Fan)                                 (543)

where sing(A) is the vector of singular values of the matrix A.


10.4.4 Inequalities

E. H. Rasmussen has in yet unpublished material derived and collected the
following inequalities. They are collected in a table as below, assuming A is an
m x n matrix and d = rank(A)

                  ||A||_max   ||A||_1    ||A||_\infty  ||A||_2   ||A||_F   ||A||_KF
    ||A||_max                 1          1             1         1         1
    ||A||_1       m                      m             sqrt(m)   sqrt(m)   sqrt(m)
    ||A||_\infty  n           n                        sqrt(n)   sqrt(n)   sqrt(n)
    ||A||_2       sqrt(mn)    sqrt(n)    sqrt(m)                 1         1
    ||A||_F       sqrt(mn)    sqrt(n)    sqrt(m)       sqrt(d)             1
    ||A||_KF      sqrt(mnd)   sqrt(nd)   sqrt(md)      d         sqrt(d)

which are to be read as, e.g.

    ||A||_2 \leq \sqrt{m} \cdot ||A||_\infty                              (544)


10.4.5 Condition Number 

The 2-norm of A equals \sqrt{max(eig(A^T A))} [12, p. 57]. For a symmetric, pos-
itive definite matrix, this reduces to max(eig(A)). The condition number based
on the 2-norm thus reduces to

    ||A||_2 ||A^{-1}||_2 = max(eig(A)) max(eig(A^{-1})) = \frac{max(eig(A))}{min(eig(A))}    (545)
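A sketch of Eq. (545) for a symmetric positive definite matrix (NumPy assumed; the example matrix is arbitrary).

    import numpy as np

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    eig = np.linalg.eigvalsh(A)
    print(np.isclose(np.linalg.cond(A, 2), eig.max() / eig.min()))   # True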


10.5 Rank 

10.5.1 Sylvester’s Inequality 

If A is m x n and B is n x r, then

    rank(A) + rank(B) - n \leq rank(AB) \leq min{rank(A), rank(B)}        (546)


10.6 Integral Involving Dirac Delta Functions 

Assuming A to be square, then

    \int p(s) \delta(x - As) ds = \frac{1}{|det(A)|} p(A^{-1} x)          (547)

Assuming A to be "underdetermined", i.e. "tall", then

    \int p(s) \delta(x - As) ds = { \frac{1}{\sqrt{det(A^T A)}} p(A^+ x)    if x = A A^+ x    (548)
                                  { 0                                       elsewhere

See [9].




10.7 Miscellaneous 

For any A it holds that 

    rank(A) = rank(A^T) = rank(A A^T) = rank(A^T A)                       (549)

It holds that

    A is positive definite  <=>  \exists B invertible, such that A = BB^T    (550)




A One-dimensional Results 


A.1 Gaussian

A.1.1 Density

    p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp[ -\frac{(x - \mu)^2}{2\sigma^2} ]                  (551)

A.1.2 Normalization

    \int e^{-\frac{(s - \mu)^2}{2\sigma^2}} ds = \sqrt{2\pi\sigma^2}                             (552)

    \int e^{-(ax^2 + bx + c)} dx = \sqrt{\frac{\pi}{a}} \exp[ \frac{b^2 - 4ac}{4a} ]             (553)

    \int e^{c_2 x^2 + c_1 x + c_0} dx = \sqrt{\frac{\pi}{-c_2}} \exp[ \frac{c_1^2 - 4 c_2 c_0}{-4 c_2} ]    (554)

A.1.3 Derivatives

    \frac{\partial p(x)}{\partial \mu} = p(x) \frac{(x - \mu)}{\sigma^2}                         (555)

    \frac{\partial \ln p(x)}{\partial \mu} = \frac{(x - \mu)}{\sigma^2}                          (556)

    \frac{\partial p(x)}{\partial \sigma} = p(x) \frac{1}{\sigma} [ \frac{(x - \mu)^2}{\sigma^2} - 1 ]    (557)

    \frac{\partial \ln p(x)}{\partial \sigma} = \frac{1}{\sigma} [ \frac{(x - \mu)^2}{\sigma^2} - 1 ]     (558)


A.1.4 Completing the Squares

    c_2 x^2 + c_1 x + c_0 = -a(x - b)^2 + w

    -a = c_2,    b = -\frac{1}{2}\frac{c_1}{c_2},    w = c_0 - \frac{1}{4}\frac{c_1^2}{c_2}

or

    c_2 x^2 + c_1 x + c_0 = -\frac{1}{2\sigma^2}(x - \mu)^2 + d

    \mu = \frac{-c_1}{2 c_2},    \sigma^2 = \frac{-1}{2 c_2},    d = c_0 - \frac{c_1^2}{4 c_2}

A.1.5 Moments

If the density is expressed by

    p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp[ -\frac{(x - \mu)^2}{2\sigma^2} ]    or    p(x) = C \exp(c_2 x^2 + c_1 x)    (559)


then the first few basic moments are 




    <x>   = \mu                                = \frac{-c_1}{2 c_2}
    <x^2> = \sigma^2 + \mu^2                   = \frac{-1}{2 c_2} + ( \frac{-c_1}{2 c_2} )^2
    <x^3> = 3\sigma^2 \mu + \mu^3              = \frac{c_1}{(2 c_2)^2} [ 3 - \frac{c_1^2}{2 c_2} ]
    <x^4> = \mu^4 + 6\mu^2\sigma^2 + 3\sigma^4 = ( \frac{c_1}{2 c_2} )^4 + 6 ( \frac{c_1}{2 c_2} )^2 \frac{-1}{2 c_2} + 3 ( \frac{1}{2 c_2} )^2

and the central moments are

    <(x - \mu)>   = 0         = 0
    <(x - \mu)^2> = \sigma^2  = \frac{-1}{2 c_2}
    <(x - \mu)^3> = 0         = 0
    <(x - \mu)^4> = 3\sigma^4 = 3 ( \frac{1}{2 c_2} )^2

A kind of pseudo-moments (un-normalized integrals) can easily be derived as

    \int \exp(c_2 x^2 + c_1 x) x^n dx = Z <x^n> = \sqrt{\frac{\pi}{-c_2}} \exp[ \frac{c_1^2}{-4 c_2} ] <x^n>    (560)

From the un-centralized moments one can derive other entities like

    <x^2> - <x>^2    = \sigma^2                       = \frac{-1}{2 c_2}
    <x^3> - <x^2><x> = 2 \sigma^2 \mu                 = \frac{2 c_1}{(2 c_2)^2}
    <x^4> - <x^2>^2  = 2 \sigma^4 + 4 \mu^2 \sigma^2  = \frac{2}{(2 c_2)^2} [ 1 - \frac{2 c_1^2}{2 c_2} ]


A.2 One Dimensional Mixture of Gaussians


A.2.1 Density and Normalization

    p(s) = \sum_{k=1}^{K} \rho_k \frac{1}{\sqrt{2\pi\sigma_k^2}} \exp[ -\frac{(s - \mu_k)^2}{2\sigma_k^2} ]    (561)

A.2.2 Moments

A useful fact of MoG, is that

    <x^n> = \sum_k \rho_k <x^n>_k                                         (562)

where <.>_k denotes average with respect to the k.th component. We can calculate
the first four moments from the densities

    p(x) = \sum_k \rho_k \frac{1}{\sqrt{2\pi\sigma_k^2}} \exp[ -\frac{(x - \mu_k)^2}{2\sigma_k^2} ]    (563)

    p(x) = \sum_k \rho_k C_k \exp[ c_{k2} x^2 + c_{k1} x ]                (564)



    <x>   = \sum_k \rho_k \mu_k
    <x^2> = \sum_k \rho_k (\sigma_k^2 + \mu_k^2)
    <x^3> = \sum_k \rho_k (3\sigma_k^2 \mu_k + \mu_k^3)
    <x^4> = \sum_k \rho_k (\mu_k^4 + 6\mu_k^2\sigma_k^2 + 3\sigma_k^4)

where \mu_k = -c_{k1}/(2 c_{k2}) and \sigma_k^2 = -1/(2 c_{k2}) as in Sec. A.1.5.

If all the gaussians are centered, i.e. \mu_k = 0 for all k, then

    <x>   = 0
    <x^2> = \sum_k \rho_k \sigma_k^2
    <x^3> = 0
    <x^4> = \sum_k \rho_k 3\sigma_k^4

From the un-centralized moments one can derive other entities like

    <x^2> - <x>^2    = \sum_{k,k'} \rho_k \rho_{k'} [ \mu_k^2 + \sigma_k^2 - \mu_k \mu_{k'} ]
    <x^3> - <x^2><x> = \sum_{k,k'} \rho_k \rho_{k'} [ 3\sigma_k^2 \mu_k + \mu_k^3 - (\sigma_k^2 + \mu_k^2)\mu_{k'} ]
    <x^4> - <x^2>^2  = \sum_{k,k'} \rho_k \rho_{k'} [ \mu_k^4 + 6\mu_k^2\sigma_k^2 + 3\sigma_k^4 - (\sigma_k^2 + \mu_k^2)(\sigma_{k'}^2 + \mu_{k'}^2) ]


A.2.3 Derivatives

Defining p(s) = \sum_k \rho_k N_s(\mu_k, \sigma_k^2) we get for a parameter \theta_j of the j.th compo-
nent

    \frac{\partial \ln p(s)}{\partial \theta_j}
        = \frac{\rho_j N_s(\mu_j, \sigma_j^2)}{\sum_k \rho_k N_s(\mu_k, \sigma_k^2)}
          \frac{\partial \ln(\rho_j N_s(\mu_j, \sigma_j^2))}{\partial \theta_j}                 (565)

that is,

    \frac{\partial \ln p(s)}{\partial \rho_j}
        = \frac{\rho_j N_s(\mu_j, \sigma_j^2)}{\sum_k \rho_k N_s(\mu_k, \sigma_k^2)} \frac{1}{\rho_j}                           (566)

    \frac{\partial \ln p(s)}{\partial \mu_j}
        = \frac{\rho_j N_s(\mu_j, \sigma_j^2)}{\sum_k \rho_k N_s(\mu_k, \sigma_k^2)} \frac{(s - \mu_j)}{\sigma_j^2}             (567)

    \frac{\partial \ln p(s)}{\partial \sigma_j}
        = \frac{\rho_j N_s(\mu_j, \sigma_j^2)}{\sum_k \rho_k N_s(\mu_k, \sigma_k^2)} \frac{1}{\sigma_j} [ \frac{(s - \mu_j)^2}{\sigma_j^2} - 1 ]    (568)

Note that \rho_k must be constrained to be proper ratios. Defining the ratios by
\rho_j = e^{r_j} / \sum_k e^{r_k}, we obtain

    \frac{\partial \ln p(s)}{\partial r_j} = \sum_l \frac{\partial \ln p(s)}{\partial \rho_l} \frac{\partial \rho_l}{\partial r_j},
    where \frac{\partial \rho_l}{\partial r_j} = \rho_l (\delta_{lj} - \rho_j)                  (569)


B Proofs and Details 


B.1 Misc Proofs


B.1.1 Proof of Equation 524 


The following proof is the work of Florian Roemer. Note that the vectors and ma-
trices below can be complex and the notation X^H is used for transpose and
conjugated, while X^T is only transpose of the complex matrix.




Define the row vector y = a^H X B and the column vector z = X^H c. Then

    a^T X B X^T c = y z = z^T y^T

Note that y can be rewritten as vec(y)^T which is the same as

    vec(conj(y))^H = vec(a^T conj(X) conj(B))^H

where "conj" means complex conjugated. Applying the vec rule for linear forms
Eq. 520, we get

    y = (B \otimes a)^H vec(conj(X)) = vec(X)^H (B \otimes conj(a))

where we have also used the rule for transpose of Kronecker products. For y^T
this yields (B^T \otimes a^H) vec(X). Similarly we can rewrite z which is the same as
vec(z^T) = vec(c^T conj(X)). Applying again Eq. 520, we get

    z = (I \otimes c^T) vec(conj(X))

where I is the identity matrix. For z^T we obtain vec(X)^H (I \otimes c). Finally, the
original expression is z^T y^T which now takes the form

    vec(X)^H (I \otimes c)(B^T \otimes a^H) vec(X)

The final step is to apply the rule for products of Kronecker products and by
that combine the Kronecker products. This gives

    vec(X)^H (B^T \otimes c a^H) vec(X)

which is the desired result.


B.1.2 Proof of Equation 493

For any analytical function f(X) of a matrix argument X, it holds that

    f(AB)A = ( \sum_{n=0}^{\infty} c_n (AB)^n ) A
           = \sum_{n=0}^{\infty} c_n (AB)^n A
           = \sum_{n=0}^{\infty} c_n A (BA)^n
           = A \sum_{n=0}^{\infty} c_n (BA)^n
           = A f(BA)


B.1.3 Proof of Equation 91

Essentially we need to calculate

    \frac{\partial (X^n)_{kl}}{\partial X_{ij}}
        = \frac{\partial}{\partial X_{ij}} \sum_{u_1, ..., u_{n-1}} X_{k,u_1} X_{u_1,u_2} ... X_{u_{n-1},l}
        = \delta_{k,i} \delta_{u_1,j} X_{u_1,u_2} ... X_{u_{n-1},l}
          + X_{k,u_1} \delta_{u_1,i} \delta_{u_2,j} ... X_{u_{n-1},l}
          + ...
          + X_{k,u_1} X_{u_1,u_2} ... \delta_{u_{n-1},i} \delta_{l,j}
        = \sum_{r=0}^{n-1} (X^r)_{ki} (X^{n-1-r})_{jl}
        = \sum_{r=0}^{n-1} (X^r J^{ij} X^{n-1-r})_{kl}

Using the properties of the single entry matrix found in Sec. 9.7.4, the result
follows easily.


B.1.4 Details on Eq. 571

    \partial det(X^H A X)
        = det(X^H A X) Tr[(X^H A X)^{-1} \partial(X^H A X)]
        = det(X^H A X) Tr[(X^H A X)^{-1} (\partial(X^H) A X + X^H \partial(A X))]
        = det(X^H A X) ( Tr[(X^H A X)^{-1} \partial(X^H) A X]
                       + Tr[(X^H A X)^{-1} X^H \partial(A X)] )
        = det(X^H A X) ( Tr[A X (X^H A X)^{-1} \partial(X^H)]
                       + Tr[(X^H A X)^{-1} X^H A \partial(X)] )

First, the derivative is found with respect to the real part of X

    \frac{\partial det(X^H A X)}{\partial \Re X}
        = det(X^H A X) ( \frac{Tr[A X (X^H A X)^{-1} \partial(X^H)]}{\partial \Re X}
                       + \frac{Tr[(X^H A X)^{-1} X^H A \partial(X)]}{\partial \Re X} )
        = det(X^H A X) ( A X (X^H A X)^{-1} + ((X^H A X)^{-1} X^H A)^T )

Through the calculations, (100) and (240) were used. In addition, by use of
(241), the derivative is found with respect to the imaginary part of X

    i \frac{\partial det(X^H A X)}{\partial \Im X}
        = i det(X^H A X) ( \frac{Tr[A X (X^H A X)^{-1} \partial(X^H)]}{\partial \Im X}
                         + \frac{Tr[(X^H A X)^{-1} X^H A \partial(X)]}{\partial \Im X} )
        = det(X^H A X) ( A X (X^H A X)^{-1} - ((X^H A X)^{-1} X^H A)^T )

Hence, the derivative yields

    \frac{\partial det(X^H A X)}{\partial X}
        = \frac{1}{2} ( \frac{\partial det(X^H A X)}{\partial \Re X} - i \frac{\partial det(X^H A X)}{\partial \Im X} )
        = det(X^H A X) ((X^H A X)^{-1} X^H A)^T

and the complex conjugate derivative yields

    \frac{\partial det(X^H A X)}{\partial X^*}
        = \frac{1}{2} ( \frac{\partial det(X^H A X)}{\partial \Re X} + i \frac{\partial det(X^H A X)}{\partial \Im X} )
        = det(X^H A X) A X (X^H A X)^{-1}

Notice, for real X, A, the sum of (249) and (250) is reduced to (54).
Similar calculations yield

    \frac{\partial det(X A X^H)}{\partial X}
        = \frac{1}{2} ( \frac{\partial det(X A X^H)}{\partial \Re X} - i \frac{\partial det(X A X^H)}{\partial \Im X} )
        = det(X A X^H) (A X^H (X A X^H)^{-1})^T                           (570)

and

    \frac{\partial det(X A X^H)}{\partial X^*}
        = \frac{1}{2} ( \frac{\partial det(X A X^H)}{\partial \Re X} + i \frac{\partial det(X A X^H)}{\partial \Im X} )
        = det(X A X^H) (X A X^H)^{-1} X A                                 (571)


Petersen & Pedersen, The Matrix Cookbook, Version: November 15, 2012, Page 69 


REFERENCES 


REFERENCES 


References 

[1] Karl Gustav Andersson and Lars-Christer Boiers. Ordinära differentialek-
vationer. Studentlitteratur, 1992.

[2] Jörn Anemüller, Terrence J. Sejnowski, and Scott Makeig. Complex inde-
pendent component analysis of frequency-domain electroencephalographic
data. Neural Networks, 16(9):1311-1323, November 2003.

[3] S. Barnett. Matrices: Methods and Applications. Oxford Applied Mathe-
matics and Computing Science Series. Clarendon Press, 1990.

[4] Christopher Bishop. Neural Networks for Pattern Recognition. Oxford 
University Press, 1995. 

[5] Robert J. Boik. Lecture notes: Statistics 550. Online, April 22 2002. Notes. 

[6] D. H. Brandwood. A complex gradient operator and its application in 
adaptive array theory. IEE Proceedings , 130(1):11— 16, February 1983. PTS. 
F and H. 

[7] M. Brookes. Matrix Reference Manual, 2004. Website May 20, 2004. 

[8] Conradsen K., En introduktion til statistik, IMM lecture notes, 1984.

[9] Mads Dyrholm. Some matrix results, 2004. Website August 23, 2004. 

[10] Nielsen F. A., Formula , Neuro Research Unit and Technical university of 
Denmark, 2002. 

[11] Gelman A. B., J. S. Carlin, H. S. Stern, D. B. Rubin, Bayesian Data 
Analysis, Chapman and Hall / CRC, 1995. 

[12] Gene H. Golub and Charles F. van Loan. Matrix Computations. The Johns 
Hopkins University Press, Baltimore, 3rd edition, 1996. 

[13] Robert M. Gray. Toeplitz and circulant matrices: A review. Technical 
report, Information Systems Laboratory, Department of Electrical Engi- 
neering, Stanford University, Stanford, California 94305, August 2002. 

[14] Simon Haykin. Adaptive Filter Theory. Prentice Hall, Upper Saddle River, 
NJ, 4th edition, 2002. 

[15] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge 
University Press, 1985. 

[16] Mardia K. V., J.T. Kent and J.M. Bibby, Multivariate Analysis, Academic 
Press Ltd., 1979. 

[17] Mathpages on "Eigenvalue Problems and Matrix Invariants",
http://www.mathpages.com/home/kmath128.htm

[18] Carl D. Meyer. Generalized inversion of modified matrices. SIAM Journal 
of Applied Mathematics, 24(3):315-323, May 1973. 




[19] Thomas P. Minka. Old and new matrix algebra useful for statistics, De- 
cember 2000. Notes. 

[20] Daniele Mortari. Ortho-Skew and Ortho-Sym Matrix Trigonometry. John
Lee Junkins Astrodynamics Symposium, AAS 03-265, May 2003. Texas
A&M University, College Station, TX.

[21] L. Parra and C. Spence. Convolutive blind separation of non-stationary 
sources. In IEEE Transactions Speech and Audio Processing , pages 320- 
327, May 2000. 

[22] Kaare Brandt Petersen, Jiucang Hao, and Te-Won Lee. Generative and 
filtering approaches for overcomplete representations. Neural Information 
Processing - Letters and Reviews, vol. 8(1), 2005. 

[23] John G. Proakis and Dimitris G. Manolakis. Digital Signal Processing. 
Prentice-Hall, 1996. 

[24] Laurent Schwartz. Cours d’ Analyse, volume II. Hermann, Paris, 1967. As 
referenced in [14] . 

[25] Shayle R. Searle. Matrix Algebra Useful for Statistics. John Wiley and 
Sons, 1982. 

[26] G. Seber and A. Lee. Linear Regression Analysis. John Wiley and Sons, 
2002 . 

[27] S. M. Selby. Standard Mathematical Tables. CRC Press, 1974. 

[28] Inna Stainvas. Matrix algebra in differential calculus. Neural Computing 
Research Group, Information Engeneering, Aston University, UK, August 
2002. Notes. 

[29] P. P. Vaidyanathan. Multirate Systems and Filter Banks. Prentice Hall, 
1993. 

[30] Max Welling. The Kalman Filter. Lecture Note. 

[31] Wikipedia on minors: "Minor (linear algebra)",
http://en.wikipedia.org/wiki/Minor_(linear_algebra)

[32] Zhaoshui He, Shengli Xie, et al, ’’Convolutive blind source separation in 
frequency domain based on sparse representation” , IEEE Transactions on 
Audio, Speech and Language Processing, vol. 15(5) :1551-1563, July 2007. 

[33] Karim T. Abou-Moustafa On Derivatives of Eigenvalues and Eigenvectors 
of the Generalized Eigenvalue Problem. McGill Technical Report, October 
2010 . 

[34] Mohammad Emtiyaz Khan Updating Inverse of a Matrix When a Column 
is Added/Removed. Emt CS,UBC February 27, 2008 




Index 


Anti-symmetric, 54
Block matrix, 46
Chain rule, 15
Cholesky-decomposition, 32
Co-kurtosis, 34
Co-skewness, 34
Condition number, 62
Cramers Rule, 29
Derivative of a complex matrix, 24
Derivative of a determinant, 8
Derivative of a trace, 12
Derivative of an inverse, 9
Derivative of symmetric matrix, 15
Derivatives of Toeplitz matrix, 16
Dirichlet distribution, 37
Eigenvalues, 30
Eigenvectors, 30
Exponential Matrix Function, 59
Gaussian, conditional, 40
Gaussian, entropy, 44
Gaussian, linear combination, 41
Gaussian, marginal, 40
Gaussian, product of densities, 42
Generalized inverse, 21
Hadamard inequality, 52
Hermitian, 48
Idempotent, 49
Kronecker product, 59
LDL decomposition, 33
LDM-decomposition, 33
Linear regression, 28
LU decomposition, 32
Lyapunov Equation, 30
Moore-Penrose inverse, 21
Multinomial distribution, 37
Nilpotent, 49
Norm of a matrix, 61
Norm of a vector, 61
Normal-Inverse Gamma distribution, 37
Normal-Inverse Wishart distribution, 39
Orthogonal, 49
Power series of matrices, 58
Probability matrix, 55
Pseudo-inverse, 21
Schur complement, 41, 47
Single entry matrix, 52
Singular Valued Decomposition (SVD), 31
Skew-Hermitian, 48
Skew-symmetric, 54
Stochastic matrix, 55
Student-t, 37
Sylvester's Inequality, 62
Symmetric, 54
Taylor expansion, 58
Toeplitz matrix, 54
Transition matrix, 55
Trigonometric functions, 59
Unipotent, 49
Vandermonde matrix, 57
Vec operator, 59, 60
Wishart distribution, 38
Woodbury identity, 18

